The People Deliberately Killing Facebook

Edward Zitron

Enjoy this post? Why not try the podcast version? Please download it, and also all the other episodes.


Over the last decade, few platforms have declined quite as rapidly and visibly as Facebook and Instagram. What used to be apps for catching up with your friends and family are now algorithmic nightmares that constantly interrupt you with suggested content and advertisements, which consistently outweigh the posts of the people you actually chose to follow.

Meanwhile, those running Facebook groups routinely find that their content isn’t even being shown to the people who chose to follow them, thanks to Meta’s outright abusive approach to social media, one where the customer is not only wrong, but should ideally have as little control as possible over what they see.

Over the next two newsletters, I’m going to walk you through the decline of Facebook and Instagram, starting with the events that led to their decay and the people I believe are responsible for turning the world’s most popular consumer apps into Skinner boxes for advertising agencies.

In the case of Facebook, the decline is either reflected in — or directly facilitated by — two specific features: People You May Know and the News Feed. It’s my belief that these products, piloted by deeply insidious people led by Mark Zuckerberg himself, are central to the darkness inside this company.

And I want to dispel a notion that many people have, that Facebook was once good and only later went bad: Facebook has, from the very beginning, been a rotten, manipulative company, one that acted with little regard for users, even when it pretended otherwise. The goal has always been more — not just more money, but more engagement, more time on the app, more visits to the website, and perpetual growth, all mandated by Mark Zuckerberg and a rogues’ gallery of rot economists.

We start our story in 2004, a few months after the company was founded, when Peter Thiel became Facebook’s first outside investor, putting $500,000 into the company in return for a 10.2% stake. As depicted in the 2010 Aaron Sorkin-penned film The Social Network, Thiel was introduced to Zuckerberg by Sean Parker, the co-founder of Napster, his wounds fresh from a brutal beating by Lars Ulrich and Metallica.

What was significant about this event wasn’t the people involved or the amount of money invested, but the terms of the deal — terms that would permanently and irrevocably ruin Facebook. Parker, ever the advocate for founders, negotiated with Peter Thiel to allow Mark Zuckerberg to retain two of Facebook’s five board seats. When Parker resigned in 2005, he insisted that Zuckerberg be given his seat as well.

While this may have seemed a noble, even ideal arrangement for a founder in those early days, this single move has allowed Zuckerberg to wield complete power over Facebook, and was the first step along a road that made him impossible to fire, ultimately dooming the company to whatever miserable death march Harvard’s most quirked-up white boy deems fit.

As an aside: Zuckerberg’s invulnerability also comes from Meta’s dual-class share structure. Essentially, there are two kinds of Meta stock: Class A shares, which are widely traded and represent one vote, and Class B shares, which each represent ten votes. 

Zuckerberg owns an estimated 60% of all Class B shares, with a significant chunk of the remainder held by insiders and other sycophants. This means that despite owning only roughly 13% of Facebook’s shares, his power is virtually absolute. While there are far fewer Class B shares than Class A shares, it doesn’t really matter, given the disproportionate power they wield. In essence, this makes Zuckerberg invulnerable to a stockholder revolt, or to the arrival of an activist investor (like, for example, Bill Ackman).

This isn’t a particularly unusual structure. It’s common within technology companies, particularly those founded over the past few decades. Google has a three-tier stock structure, with Class B shares (those with the most voting power) held by Larry Page, Sergey Brin, and former CEO Eric Schmidt, in essence giving them a majority despite collectively owning only a relatively small percentage of the company. Google also has Class C shares, which carry no voting rights at all, a class that Facebook itself tried (and failed) to issue.
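If you want to see how lopsided the arithmetic gets, here’s a minimal sketch. The share counts are illustrative round numbers I’ve made up for the example, not figures from Meta’s filings; only the one-vote/ten-vote split comes from the structure described above.

```python
# A minimal sketch of dual-class voting math. All share counts are
# illustrative assumptions, not Meta's actual figures; only the
# 1-vote/10-vote split reflects the structure described above.

CLASS_A_VOTES = 1   # one vote per Class A share
CLASS_B_VOTES = 10  # ten votes per Class B share

def stakes(a_total, b_total, a_held, b_held):
    """Return (economic stake, voting power) for a given holder."""
    economic = (a_held + b_held) / (a_total + b_total)
    voting = (a_held * CLASS_A_VOTES + b_held * CLASS_B_VOTES) / (
        a_total * CLASS_A_VOTES + b_total * CLASS_B_VOTES
    )
    return economic, voting

# Hypothetical company: 2 billion Class A shares, 450 million Class B.
A_TOTAL, B_TOTAL = 2_000_000_000, 450_000_000

# A founder holding 60% of Class B and a sliver of Class A...
econ, vote = stakes(A_TOTAL, B_TOTAL, a_held=50_000_000, b_held=270_000_000)
print(f"founder alone:      {econ:.0%} of shares, {vote:.0%} of votes")

# ...plus allied insiders holding most of the remaining Class B.
econ, vote = stakes(A_TOTAL, B_TOTAL, a_held=50_000_000, b_held=430_000_000)
print(f"founder + insiders: {econ:.0%} of shares, {vote:.0%} of votes")
```

With these made-up numbers, the founder owns about 13% of the company but controls roughly 42% of the votes on his own, and a comfortable majority once the insider bloc is counted. That’s the whole trick: the economic stake and the voting stake are two different numbers, and only one of them matters in a boardroom fight.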

I cannot express how important this moment was. This was when the mistake was made, albeit innocently, that kept Zuckerberg in power. I would be kind and suggest that Sean Parker didn’t know what Mark Zuckerberg was like at this point, except that he absolutely did, and he should have known better.

Two years later, in April 2006, Zuckerberg was once again able to negotiate terms that kept him at the top, in part thanks to Facebook’s strong revenue — $6 million a year, an impressive sum at the time, especially for a company in the new and ill-defined social networking space — and the fact that he controlled three of the five board seats and could do whatever the hell he wanted. Venture capitalists desperate to avoid missing out on a company with rocketship growth had little leverage against a man with the keys to the most important company in tech.

In September 2006, Facebook would launch the News Feed — a relatively unsophisticated chronological feed of your friends’ status updates that is, on some level, one of the most important launches in modern software history, and a great example of a good product destroyed by executives who realized that a focal point for their users is a great place to torture them with advertising.

Don’t get me wrong. It had a rough start. The feed, which Facebook product manager Ruchi Sanghvi described as “quite unlike anything you can find on the web,” led to an immediate revolt among Facebook’s 9.5 million users, as it effectively showed a stream of literally every action you took on the platform, which users found “creepy” and “stalker-esque,” according to a group created at the time called Students Against Facebook News Feed.

For those who weren’t around when it launched, or didn’t use Facebook, this bit is hard to grasp. The feed would show you literally everything you and your friends did: if you joined or left a group, or made a new connection, it would appear for all to see. It was almost like a panopticon — one where you are simultaneously the jailer and the prisoner.

A few days later, Zuckerberg would apologize, saying that Facebook “really messed this one up,” and that the company spent two days coding non-stop to add a privacy page that allowed users to choose which types of stories went into their News Feed. The feature was indeed released, but it’s important to know that while the rest of the world might not have known what you were doing afterwards, Mark Zuckerberg and Facebook absolutely did.

This was the same year that Facebook opened to the general public, turned down a billion-dollar acquisition offer from Yahoo, and signed a three-year advertising deal with Microsoft. 

Yet Facebook’s rot began in earnest in 2008, when a former McKinsey analyst, then the Vice President of Global Online Sales & Operations at Google, would be made Chief Operating Officer of Facebook. Her name was Sheryl Sandberg, and she would be the force that would grow Facebook into a truly evil company.


According to an excerpt from Steven Levy’s “Facebook: The Inside Story,” sometime in 2008, Facebook’s growth had stalled somewhere around 90 million users, with Zuckerberg telling Levy in an interview that the company had “hit a wall.” 

Chamath Palihapitiya, a former venture capitalist who was at the time Facebook’s VP of Platform & Monetization, came to Zuckerberg with an idea — that Facebook should focus on a new metric: monthly active users. This number, now a cornerstone of how most tech companies measure growth, would indicate whether users were sticking with the platform, the logic being that boosting it would mean users were engaged with the product.

In a meeting with Zuckerberg and Sandberg, Palihapitiya was asked what to call the metric, and he came up with MAUs. Sandberg responded that they should “just call it growth.”

At the next board meeting, Palihapitiya would present what Levy would call “aggressive growth techniques” that would double or triple Facebook’s user base, and then “use the platform itself as an engine for growth.” 

After a tepid response at the meeting, Palihapitiya was given license to build a new growth team from both within and outside Facebook, one that would eventually become a rogues’ gallery of rot economists, including Facebook Head of Product Naomi Gleit; Alex Schultz, who would go on to become Meta’s Chief Marketing Officer and VP of Analytics; and Javier Olivan, who replaced Sandberg as Chief Operating Officer in 2022.

Gleit, Olivan and Schultz are at the epicenter of almost every single choice that Meta has made to put growth above the user experience. Palihapitiya wanted his team — which also included early Facebook data scientist Danny Ferrante and Blake Ross, who previously co-created the Firefox web browser — to become “the Growth Circle,” what Levy refers to as “a power center in the company with special status and a distinctive subculture.”

He succeeded. In 2008, Facebook would launch a feature called “People You May Know,” which would, as the name suggests, recommend people you might know on Facebook. Yet this seemingly innocent feature was a little too good at its job, with reporter Kashmir Hill — who spent over a year investigating it for Gizmodo from 2016 to 2017 — saying it “mined information users don’t have control over to make connections they may not want to make,” such as suggesting patients friend their psychiatrist, or outing a sex worker’s real identity to her clients.

Despite doggedly researching People You May Know, Hill never got Facebook to explain how it worked. Steven Levy, however, was able to get Palihapitiya to reveal one horrifying detail — that Facebook’s growth team would take out Google ads on people’s names, targeting those who hadn’t joined Facebook with “dark profiles,” fake links that would suggest that somebody had already taken their name.

While many might consider People You May Know a well-meaning feature, the algorithm behind the service is extremely powerful and sinister, capable of dredging up distant relationships, honed both by the data you knowingly give Facebook (the connections you’ve made, for example) and by other, unknown sources.

I do, however, believe there’s another person responsible — a man named Lars Backstrom, a relatively unknown yet powerful figure in Facebook’s history.

Backstrom’s LinkedIn notes that he “built PYMK (People You May Know)’s backend infrastructure and machine learning system” from September 2009 through February 2012. In 2013, Backstrom published a paper with computer scientist Jon Kleinberg called “Romantic Partnerships and the Dispersion of Social Ties: A Network Analysis of Relationship Status on Facebook.” The paper describes an algorithm that, according to the Times’ Steve Lohr, could independently identify someone’s spouse 60 percent of the time, and could even predict when a couple might break up.

The paper hinged on the idea that what signals a couple’s relationship status isn’t the raw number of mutual friends, but rather the “dispersion” of those mutual friends — how many of them come from otherwise disconnected corners of your network, rather than all knowing one another. A romantic partner tends to know your family, your coworkers and your college friends, circles that otherwise share nobody in common. This is a seemingly sensible idea that, when framed as an algorithm made by the engineer behind Facebook’s extremely successful growth tool, feels far more creepy.
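For the curious, here’s a rough sketch of the idea in plain Python. The toy network and the simple “far apart” test are illustrative assumptions on my part (the paper’s actual measure is more refined, and Facebook’s production version was never published), but they capture the core intuition: count pairs of mutual friends who don’t otherwise know each other.

```python
# A toy version of Backstrom and Kleinberg's "dispersion" idea. The
# network below and the simple distance test are illustrative
# assumptions, not the paper's exact formulation or Facebook's code.

def mutual_friends(graph, u, v):
    """Friends that u and v have in common."""
    return (graph[u] & graph[v]) - {u, v}

def dispersion(graph, u, v):
    """Count pairs of mutual friends of (u, v) that are 'far apart':
    not friends themselves, and sharing no mutual friend other than
    u and v. High dispersion suggests the tie bridges otherwise
    disconnected circles, a signature of romantic partners."""
    common = sorted(mutual_friends(graph, u, v))
    score = 0
    for i, s in enumerate(common):
        for t in common[i + 1:]:
            far_apart = t not in graph[s] and not (
                (graph[s] & graph[t]) - {u, v}
            )
            score += far_apart
    return score

# 'alex' and 'pat' share friends from work, family and college,
# circles that don't know one another.
graph = {
    "alex":     {"pat", "coworker", "cousin", "roommate"},
    "pat":      {"alex", "coworker", "cousin", "roommate"},
    "coworker": {"alex", "pat"},
    "cousin":   {"alex", "pat"},
    "roommate": {"alex", "pat"},
}

print(dispersion(graph, "alex", "pat"))  # 3 -> a partner-like tie
```

Run something like this over a real social graph and the couple’s edge lights up: your partner is often the only person who knows your cousin, your coworker and your roommate all at once.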

Backstrom, in a 2010 talk relayed by Steven Levy, said that People You May Know “accounted for a significant chunk of all friending on Facebook,” and that friends-of-friends were the most powerful part of the tool. People You May Know’s power wasn’t just that it found people you knew well, but people you sort of knew, offering you the tantalizing idea of getting close to them. And when you friended them, Facebook’s News Feed would make their content more prevalent in your feed, forcing an intimacy that may not have existed by making a new connection — no matter how tangential — feel more immediate in your life.

This subtle feature is responsible for much of Facebook’s growth, and was deliberately engineered by rot economists like Schultz, Gleit and Olivan to drive that growth at any cost, even if it endangered the lives of children.

According to the Wall Street Journal, in 2018 David Erb, an engineer in charge of one of Facebook’s community integrity teams, found that “the most frequent way adults found children to prey upon” was People You May Know. The Journal also reported that, a few days later, Erb learned that Meta was planning to add encryption to Facebook messages, something that would prevent the company from fixing the problem, and threatened to resign in protest. He was placed on leave not long after, and eventually left the company.

Facebook introduced encryption to its messages anyway, despite Erb’s stark warning that millions of pedophiles were targeting tens of millions of children. People You May Know was — and is — a dangerous tool, created, maintained and proliferated by people who now effectively run Meta.

Once Palihapitiya left Facebook in 2011 to start a venture capital firm, his team — Schultz, Gleit, and Olivan — continued to accumulate power. I believe these people, along with Lars Backstrom, are responsible for making People You May Know into a reckless, dangerous and scurrilous growth machine for the company. 

In January 2012, Facebook’s true rot would set in with the launch of sponsored stories in the News Feed. In a moment that should bring him deep shame, Josh Constine claimed he’d “rather see [Facebook]…inform [him] about the activity of friends than traditional ads that can be much less relevant,” a statement that captures exactly how little the tech media criticized this company. He also added one interesting tidbit — that Facebook had tested letting advertisers pay for sponsored content in the News Feed in 2006, but discontinued doing so in 2008, deciding that advertisers shouldn’t be able to show content in the News Feed unless it could appear there naturally, a detail Constine relayed without a single hint of alarm.

In February, Lars Backstrom, the architect of People You May Know, would move over to manage News Feed ranking, and a few months later, Facebook would acquire Instagram for $1 billion. In May, Facebook would have what was widely considered a disastrous IPO, with Wall Street concerned about its lack of growth on mobile devices. After its first day of trading, the share price was barely above its debut, and by September, it had shed more than half its value.

Something had to change, and sometime in 2012, Facebook would promote product manager Adam Mosseri to head of its News Feed division. It was during this period that Facebook started recommending content in the News Feed.

In Mark Zuckerberg’s statement of intent to potential investors ahead of Facebook’s IPO, he declared that there was a “huge need and huge opportunity to get everyone in the world connected,” or what I like to think of as Phase 1 of Facebook’s decline.

Behind the curtain, Zuckerberg’s vision wasn’t to “get everyone in the world connected,” but to get everyone in the world connected to Facebook, and by the end of 2012 the site had a billion users, or a seventh of humanity.  

He also made one very bizarre statement: that Facebook didn’t build services to make money, but made money to build services. 

Nothing could have been further from the truth.


For over a decade, Facebook has deliberately made itself worse to make more money, thanks to Mark Zuckerberg and the Growth Team perpetuating a culture that manipulates and tortures users to make company metrics improve.

In a 2016 memo leaked to BuzzFeed several years ago, then-Vice President Andrew “Boz” Bosworth — now Meta’s Chief Technology Officer — wrote a horrifying screed that romanticized Facebook’s growth-at-all-costs culture. Boz noted that connecting people on Facebook “may cost a life by exposing someone to bullies,” and that “maybe someone dies in a terrorist attack” coordinated using Facebook’s tools, but that because Facebook’s mission was to “connect people,” “all the work [Facebook does] in growth is justified.”

Boz makes it clear that this is a moral judgment by adding that “all questionable contact importing practices, all the subtle language that helps people stay searchable by friends, all the work we do to bring more communication in” was justified in the pursuit of growth.

This man — this horrible, nasty man — is the Chief Technology Officer of Meta, and the mastermind behind both Meta’s Metaverse and Artificial Intelligence goals.

And as I’ll explain, Facebook is culturally a growth-at-all-costs company, and will change its products however it needs to in order to make a number go up.

In December 2020, a Facebook engineer published a document they had written in 2019 called “When User-Engagement ≠ User-Value,” shared with me by a source who tells me it’s available through Harvard.

Caption: A section of “When User-Engagement ≠ User-Value,” written in 2019, published in December 2020.

This remarkable document, shared with Facebook’s staff, issues a stark warning that Facebook’s brutal focus on user engagement and “time spent” damages the user experience. It first explains the historical context using the analogy of cable TV versus broadcast TV: cable networks need to maximize the overall value of their fixed-fee offerings, while broadcast networks need to maximize engagement with their channels, something the engineer explains some do by adding constant recaps, padding scenes and tacking on cliffhangers.

The engineer then explains the several ways that Facebook makes its products worse to maximize engagement, listing how:

  • Facebook deliberately limited the amount of information in notifications to make people come to the site to increase engagement, as people had to keep checking the site to see what was happening and couldn’t rely on notifications for, well, notifying them about stuff. The engineer referred to this as a “clear value-engagement tradeoff.”
  • Facebook deliberately stopped sending out emails telling people what happened on their feed so that they’d have to check the site.
  • Facebook, by maximizing for time spent on the site, incentivizes “bad ranking,” I imagine because when you’re incentivized to keep people on the site, you’re not actually trying to provide a service.
  • Headlines on the Facebook feed say misleading things and “subtly exaggerate” to get people to click links “all the time,” and the solution to deal with it was “very crude.”

This piece is a dire warning to Facebook’s internal staff, and includes numerous worrying details about how the company keeps users engaged, layered with evidence from the news and academia.

The engineer warns:

  1. Higher Facebook use is correlated with worse psychological states.
  2. An experiment found that a 1-month break from Facebook improved self-reported well-being.
  3. A large fraction of Facebook users struggle with their Facebook/Instagram use.
  4. A significant minority of Facebook users (3.1%) report serious problems with sleep, work or relationships that they attribute to Facebook AND concerns or preoccupations with how they use Facebook.

Caption: A section of “When User-Engagement ≠ User-Value,” written in 2019, published in December 2020.

In their suggested solutions, the engineer notes that “over the last couple of years News Feed slowly switched from maximizing time-spent towards maximizing sessions” — meaning that Facebook now focused less on how long a user was on the platform and more on how many times someone visited it, which the engineer noted was “a strong predictor of problematic use.”
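To make the distinction concrete, here’s a small sketch of how the two metrics can pull in opposite directions. The 30-minute inactivity cutoff and the toy event logs are illustrative assumptions (I have no visibility into how Facebook actually defines a session), but the divergence is the point.

```python
# A minimal sketch of how "time spent" and "sessions" can pull apart as
# metrics. The 30-minute inactivity cutoff and the toy event logs are
# illustrative assumptions, not Facebook's documented definitions.

def sessionize(timestamps, gap=30):
    """Split sorted event timestamps (in minutes) into sessions wherever
    the gap between consecutive events exceeds `gap` minutes."""
    sessions = [[timestamps[0]]]
    for t in timestamps[1:]:
        if t - sessions[-1][-1] > gap:
            sessions.append([t])      # long silence: a new session begins
        else:
            sessions[-1].append(t)    # still the same sitting
    return sessions

# One long evening scroll versus nine compulsive check-ins:
long_scroll = [0, 5, 10, 15, 20, 25, 30, 35, 40]          # one 40-min sitting
check_ins   = [0, 90, 180, 270, 360, 450, 540, 630, 720]  # nine brief visits

for name, log in [("long scroll", long_scroll), ("check-ins", check_ins)]:
    sessions = sessionize(log)
    dwell = sum(s[-1] - s[0] for s in sessions)  # rough time-on-app per log
    print(f"{name}: {len(sessions)} sessions, ~{dwell} min spent")
```

An app optimized for “time spent” wants the first pattern; an app optimized for “sessions” wants the second: nine anxious check-ins that add up to almost nothing, which is exactly what the engineer meant by “a strong predictor of problematic use.”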

This document — again, written in 2019 and published in 2020 — frames both how craven Facebook was about twisting its users’ habits, and how intimately aware it was of these problems. One commenter added, in a larger thought about addiction, that they “worried that driving sessions incentivized Facebook to make [its] products more addictive, without providing much more value.”

And that’s exactly what Facebook has become: a gratuitous social experiment where the customer is manipulated to match metrics handed down from on high.

For over a decade, Facebook has knowingly and repeatedly taken steps to maximize activities on the website at the cost of the user experience, making countless tweaks to the product to increase internal metrics obsessed over by Mark Zuckerberg, Alex Schultz and Javier Olivan. 

In another document provided to me by the same source, an engineer explained in an internal channel in October 2020 why the Facebook app took on sessions as a “Top-Line Metric” for 2019. The piece centered on Facebook’s app strategy of building “socially powered services” focused on the “possibilities of what people could do” with Facebook, largely because Facebook had no easy way to measure how it “satisfied user needs.”

This document is important, because it reveals some very specific details about the company — such as that in 2014, Mark Zuckerberg unilaterally decided that time spent on Facebook was a “top level company Engagement goal,” and forced it upon the News Feed team despite their protests that it was “too easy to game.”

To be clear, when someone at Facebook says something is “too easy to game,” they are warning that people building products inside Facebook will try to game the system by making the product worse in order to hit metrics — in the case of Time Spent, for example, by making the product more convoluted to use, so that people use the app more simply because it takes more effort to get a given thing done.

Hey, this reminds me of something. When Prabhakar Raghavan led a coup to take over Google Search from Ben Gomes, his specific obsession was with “queries,” or how many searches people made on Google. Mark Zuckerberg effectively did the same thing, demanding in 2014 that two metrics — Daily Active People and Time Spent on Facebook — perpetually increase 10% year-over-year. It didn’t matter if chasing these numbers made the product worse, or whether they actually showed that users were happy. All that mattered was whether the number went up.
Caption: A section of “Why FB App Took on Sessions as a Top-Line Metric for 2019,” posted October 2020, where “Zuck…declared the goal was a perpetual 10% y/y growth in Family TS (Time Spent)/DAP (Daily Active People).”

The document notes that in 2017, engagement metrics started to dive, but the company’s focus on “Time Spent” meant that nobody noticed, because the number that Mark Zuckerberg cared about kept going up, until the alarm was sounded and Facebook moved to “Sessions.”

Caption: A section of “Why FB App Took on Sessions as a Top-Line Metric for 2019,” posted October 2020, discussing how Facebook “was no longer focused on your friends,” leading it to pivot to “meaningful interactions.”

In this document, the engineer discusses the term “meaningful interactions,” the underlying metric that (allegedly) guides Facebook today. In January 2018, Adam Mosseri, then Head of News Feed, would post that an update to the News Feed would now “prioritize posts that spark conversations and meaningful interactions between people,” which may explain the chaos (and rot) in the News Feed thereafter.

To be clear, metrics around time spent hung around at the company, especially with regard to video, and Facebook has repeatedly and intentionally made changes that manipulate its users in order to satisfy those metrics. In his book “Broken Code,” Jeff Horwitz notes that Facebook “changed its News Feed design to encourage people to click on the reshare button or follow a page when they viewed a post,” with “engineers altering the Facebook algorithm to increase how often users saw content reshared from people they didn’t know.”

Horwitz also notes that Facebook began hunting for “friction…anything that was slowing users down or limiting their activity—with the goal of eliminating it,” something that manifested in Facebook allowing users to create an unlimited number of pages and cross-post the same material to multiple groups at once, and even a change to People You May Know that prioritized recommending accounts that were likely to accept a friend request. 

Naturally, this led to perverse incentives. In an internal document from 2018 called “The Friending 1%: Inequality in Friending,” an unnamed Facebook worker noted that the company’s aggressive optimization to make people send friend requests had created a situation where 0.5 percent of Facebook accounts were responsible for 50 percent of friend requests.

Worse still, accounts that were less than 15 days old now made up 20 percent of all outgoing friend requests, and more than half of all friend requests were sent by somebody making more than 50 of them a day, strongly suggesting that Facebook was growing its platform’s “connections” through spam.

Caption: An excerpt of “The Friending 1%: Inequality in Friending,” posted November 2018.

In a later document published in mid-2019, another engineer proposes limiting the number of invites that Facebook users can send to groups, observing that “a larger proportion of invites into bad groups come from a small subset” of “whale inviters” who sent massive numbers of invites in a short period of time, and specifically flagging this effect in vaccine misinformation groups, which posed an “integrity and product relevance risk” to the company.

I’m told that, a few years ago, Zuckerberg begrudgingly established some limits on invites, but it’s unclear what those limits are.


For Facebook — and Meta at large — anything that might limit growth is a non-starter. Sometime in 2020, another document that is apparently available through Harvard proposes limiting the distribution (as in, the recommendation) of groups before they’re proven to be trustworthy. The problem, as the unnamed engineer describes it, is that “unproven” movements were able to gain traction on Facebook long before community standards reviewers could evaluate them, some gaining over a million followers in a single day, and that the groups only began to violate community standards well after they gained traction. As you’ve likely guessed, no such limitation was ever implemented on Facebook.

In Broken Code, Horwitz notes that “even when the safety benefits were high and the business risks were low…” Facebook would choose not to use its emergency “break the glass” playbooks to take action, and even when it did, it was quick to roll them back. 

Horwitz gives an example from the run-up to Myanmar’s 2020 election, when the company rolled out Break The Glass measures, “[limiting] the spread of reshared content and [replacing] it with more content from users’ friends.” The countermeasures “produced an impressive 25% reduction in viral inflammatory posts and a 49% reduction in viral hoax photos” at a meager two percent loss in “meaningful social interactions,” Facebook’s favorite metric. Horwitz notes that despite this small change in engagement, an internal team chose to roll these changes back a few months later.

One particularly gruesome story is that of Project Daisy, a pilot program that would hide like counts on Instagram to reduce the anxiety and negative feelings that teenagers felt using the app. It was retired after Adam Mosseri, now head of Instagram, claimed it had “very little impact and the result was neutral,” relegating it to an opt-in feature. Yet a complaint filed in May 2023 by several US states, including California, New York, North Carolina and Pennsylvania, quoted a Meta researcher saying that Project Daisy was “one of the clearest things (supported by research) that [Meta could] do to positively impact social comparison and well being on IG and we should ship it,” with another researcher saying that “Daisy is such a rare case where a product intervention can improve well-being for almost everyone that uses our products.” Chillingly, one Meta employee said that if Meta refused to implement Daisy despite the research, they were “doubtful” that Meta would implement “any broad product changes with the purpose of improving user well-being,” which I think we can all agree is accurate.

The suit is labyrinthine, and specifically makes one allegation that aligns with the current state of both Facebook and Instagram — that “Meta’s algorithm alters users’ experience on the Platform and draws unwitting users into rabbit holes of algorithmically curated material.” 

And as I’ve mentioned before, the people involved from the very beginning are those perpetuating the same outright abuse of their users. An email thread from late 2017 and early 2018 between Adam Mosseri and other executives, cited on page 54 of the aforementioned complaint, discussed “significant declines in U.S. engagement metrics,” noting how reducing notifications was associated with reduced engagement, with an unnamed employee stating that there would be a tradeoff between building a better notification experience for users and recovering flailing numbers in Facebook’s “Daily Active People” metric.

In the same thread, Chief Product Officer Chris Cox said that if the team believed a filtered notification experience was better for users, they shouldn’t make changes just because a metric was down, adding that Meta needed to “get better at making the harder decisions” when the main decision criterion was experience rather than one particular metric. Alex Schultz, then VP of Analytics and now Chief Marketing Officer, responded that he “fundamentally believed that [Meta] abused the notifications channel as a company.” Though the suit doesn’t quote the rest of the thread, it notes that Director of Growth Andrew Bocking ended the discussion by saying that Meta would “prioritize engagement over reducing notifications,” and that he “just got clear input from Naomi [Gleit] that US DAP [Daily Active People] is a bigger concern for [Mark Zuckerberg] right now than user experience.”

When I wrote about Prabhakar Raghavan’s destruction of Google Search, it was much harder to find real, tangible proof of his actions — though one could easily chart a path of intent in how he wanted to increase queries and revenue in Google Search at any cost. In Meta’s case, things are much easier. 

Mark Zuckerberg is personally responsible for the state of Facebook and Instagram today, and has assembled a cadre of growth fiends who will — at times happily, at other times begrudgingly — make the user experience worse to increase engagement. And this isn’t a new phenomenon. From the very early days of Facebook, Mark Zuckerberg has acted without remorse, tricking and scheming and screwing others over in pursuit of digital dominance and financial gain, in a way that I find absolutely stomach-churning.

Yet Zuckerberg could not perpetuate these disgusting acts without the help of people like Chief Marketing Officer Alex Schultz, who saw to it that Meta shut down CrowdTangle, a public insights tool that allowed researchers to easily analyze what was happening on Facebook. Horwitz reports in Broken Code that Facebook — led by Schultz — killed CrowdTangle because reporter Kevin Roose kept posting a list of Facebook’s most-engaged-with content, which showed that Facebook was dominated by right-wing lunacy and misinformation like “Plandemic,” a COVID conspiracy film that Joel Kaplan, head of Meta’s public policy team, initially blocked the health team from removing until Roose reported that it was Facebook’s number one post.

Just to be abundantly clear, the Head of Public Policy at Facebook deliberately allowed the spread of COVID conspiracies, and would have continued to do so if a reporter hadn’t used a Facebook tool to show how popular they were, which resulted in Facebook choosing to kill the tool that allowed the reporter to find out.

Every single terrible thing you see on Facebook — be it some sort of horrible right-wing nonsense or a confusing and annoying product decision — is made in pursuit of growth. Every bit of damage that Meta has caused to the world has been either an act of ignorance or an act of deliberate harm, often tweaking the product to make it harder or more annoying to use so that you will log onto Facebook or Instagram multiple times a day and spend as much time there as possible.

This should explain why both Instagram and Facebook are so utterly abysmal to use. Meta’s company culture is one of sycophancy and user abuse, and both of these apps are deliberately engineered to get in the way of what you want to do as a means of increasing your engagement, even if said engagement is won because the thing you’re engaging with is actively fighting you.

And the rot begins at the top, with these horrifying choices perpetuated by Mark Zuckerberg and enforced by Naomi Gleit, who makes demands of engineers desperate to appease the almighty Zuck, engineers who have to (to quote Horwitz) “be careful to not frame decisions in terms of right and wrong.”

This is the petty king of the tech industry — a multi-billionaire who can never be fired, who has billions of people on websites that he has deliberately engineered to boost engagement rather than provide a service.

I won’t mince words. Mark Zuckerberg is a scumbag that hurts his users for profit. The entire tech industry should know who he is, the culture he has created, and the ways in which he has brutalized his company into finding ways to increase numbers at the cost of his users’ happiness.

As I’ll discuss in the next newsletter, this shameless and fundamentally anti-social approach is ultimately self-destructive, and Facebook’s dominant position is looking increasingly tenuous. Next week, I’ll show you how Zuckerberg’s disgusting attitude toward running a company is driving Facebook and Instagram into the ground, and how Meta is trying to obfuscate its decline with Enron-style metric-cooking tactics.


Thanks for reading Where's Your Ed At!

If you enjoyed this newsletter, please check out my podcast Better Offline and the Better Offline Reddit, or join me on the WYEA Discord chat.

