John Lanchester
- The Attention Merchants: From the Daily Newspaper to Social Media, How Our Time and Attention Is Harvested and Sold by Tim Wu
Atlantic, 416 pp, £20.00, January, ISBN 978 1 78239 482 2
- Chaos Monkeys: Inside the Silicon Valley Money Machine by Antonio García Martínez
Ebury, 528 pp, £8.99, June, ISBN 978 1 78503 455 8
- Move Fast and Break Things: How Facebook, Google and Amazon Have Cornered Culture and What It Means for All of Us by Jonathan Taplin
Macmillan, 320 pp, £18.99, May, ISBN 978 1 5098 4769 3
At the end of
June, Mark Zuckerberg announced that Facebook had hit a new level: two
billion monthly active users. That number, the company’s preferred
‘metric’ when measuring its own size, means two billion different people
used Facebook in the preceding month. It is hard to grasp just how
extraordinary that is. Bear in mind that thefacebook – its original name
– was launched exclusively for Harvard students in 2004. No human
enterprise, no new technology or utility or service, has ever been
adopted so widely so quickly. The speed of uptake far exceeds that of
the internet itself, let alone ancient technologies such as television
or cinema or radio.
Also amazing: as Facebook has grown, its users’ reliance on it has
also grown. The increase in numbers is not, as one might expect,
accompanied by a lower level of engagement. More does not mean worse –
or worse, at least, from Facebook’s point of view. On the contrary. In
the far distant days of October 2012, when Facebook hit one billion
users, 55 per cent of them were using it every day. At two billion, 66
per cent are. Its user base is growing at 18 per cent a year – which
you’d have thought impossible for a business already so enormous.
Facebook’s biggest rival for logged-in users is YouTube, owned by its
deadly rival Alphabet (the company formerly known as Google), in second
place with 1.5 billion monthly users. Three of the next four biggest
apps, or services, or whatever one wants to call them, are WhatsApp,
Messenger and Instagram, with 1.2 billion, 1.2 billion, and 700 million
users respectively (the Chinese app WeChat is the other one, with 889
million). Those three entities have something in common: they are all
owned by Facebook. No wonder the company is the fifth most valuable in
the world, with a market capitalisation of $445 billion.
Zuckerberg’s
news about Facebook’s size came with an announcement which may or may
not prove to be significant. He said that the company was changing its
‘mission statement’, its version of the canting pieties beloved of
corporate America. Facebook’s mission used to be ‘making the world more
open and connected’. A non-Facebooker reading that is likely to ask:
why? Connection is presented as an end in itself, an inherently and
automatically good thing. Is it, though? Flaubert was sceptical about
trains because he thought (in Julian Barnes’s paraphrase) that ‘the
railway would merely permit more people to move about, meet and be
stupid.’ You don’t have to be as misanthropic as Flaubert to wonder if
something similar isn’t true about connecting people on Facebook. For
instance, Facebook is generally agreed to have played a big, perhaps
even a crucial, role in the election of Donald Trump. The benefit to
humanity is not clear. This thought, or something like it, seems to have
occurred to Zuckerberg, because the new mission statement spells out a
reason for all this connectedness. It says that the new mission is to
‘give people the power to build community and bring the world closer
together’.
Hmm. Alphabet’s mission statement, ‘to organise the
world’s information and make it universally accessible and useful’, came
accompanied by the maxim ‘Don’t be evil,’ which has been the source of a
lot of ridicule: Steve Jobs called it ‘bullshit’.
Which it is, but it isn’t only bullshit. Plenty of companies, indeed
entire industries, base their business model on being evil. The
insurance business, for instance, depends on the fact that insurers
charge customers more than their insurance is worth; that’s fair enough,
since if they didn’t do that they wouldn’t be viable as businesses.
What isn’t fair is the panoply of cynical techniques that many insurers
use to avoid, as far as possible, paying out when the insured-against
event happens. Just ask anyone who has had a property suffer a major
mishap. It’s worth saying ‘Don’t be evil,’ because lots of businesses
are. This is especially an issue in the world of the internet. Internet
companies are working in a field that is poorly understood (if
understood at all) by customers and regulators. The stuff they’re doing,
if they’re any good at all, is by definition new. In that overlapping
area of novelty and ignorance and unregulation, it’s well worth
reminding employees not to be evil, because if the company succeeds and
grows, plenty of chances to be evil are going to come along.
Google
and Facebook have both been walking this line from the beginning. Their
styles of doing so are different. An internet entrepreneur I know has
had dealings with both companies. ‘YouTube knows they have lots of dirty
things going on and are keen to try and do some good to alleviate it,’
he told me. I asked what he meant by ‘dirty’. ‘Terrorist and extremist
content, stolen content, copyright violations. That kind of thing. But
Google in my experience knows that there are ambiguities, moral doubts,
around some of what they do, and at least they try to think about it.
Facebook just doesn’t care. When you’re in a room with them you can
tell. They’re’ – he took a moment to find the right word – ‘scuzzy’.
That
might sound harsh. There have, however, been ethical problems and
ambiguities about Facebook since the moment of its creation, a fact we
know because its creator was live-blogging at the time. The scene is as
it was recounted in Aaron Sorkin’s movie about the birth of Facebook,
The Social Network.
While in his first year at Harvard, Zuckerberg suffered a romantic
rebuff. Who wouldn’t respond to this by creating a website where
undergraduates’ pictures are placed side by side so that users of the
site can vote for the one they find more attractive? (The film makes it
look as if it was only female undergraduates: in real life it was both.)
The site was called Facemash. In the great man’s own words, at the
time:
I’m a little intoxicated, I’m
not gonna lie. So what if it’s not even 10 p.m. and it’s a Tuesday
night? What? The Kirkland dormitory facebook is open on my desktop and
some of these people have pretty horrendous facebook pics. I almost want
to put some of these faces next to pictures of some farm animals and
have people vote on which is the more attractive … Let the hacking
begin.
As Tim Wu explains in his energetic and original new book
The Attention Merchants,
a ‘facebook’ in the sense Zuckerberg uses it here ‘traditionally
referred to a physical booklet produced at American universities to
promote socialisation in the way that “Hi, My Name Is” stickers do at
events; the pages consisted of rows upon rows of head shots with the
corresponding name’. Harvard was already working on an electronic
version of its various dormitory facebooks. The leading social network,
Friendster, already had three million users. The idea of putting these
two things together was not entirely novel, but as Zuckerberg said at
the time, ‘I think it’s kind of silly that it would take the University a
couple of years to get around to it. I can do it better than they can,
and I can do it in a week.’
Wu argues that capturing and reselling attention has been the
basic model for a large number of modern businesses, from posters in
late 19th-century Paris, through the invention of mass-market newspapers
that made their money not through circulation but through ad sales, to
the modern industries of advertising and ad-funded TV. Facebook is in a
long line of such enterprises, though it might be the purest ever
example of a company whose business is the capture and sale of
attention. Very little new thinking was involved in its creation. As Wu
observes, Facebook is ‘a business with an exceedingly low ratio of
invention to success’. What Zuckerberg had instead of originality was
the ability to get things done and to see the big issues clearly. The
crucial thing with internet start-ups is the ability to execute plans
and to adapt to changing circumstances. It’s Zuck’s skill at doing that –
at hiring talented engineers, and at navigating the big-picture trends
in his industry – that has taken his company to where it is today. Those
two huge sister companies under Facebook’s giant wing, Instagram and
WhatsApp, were bought for $1 billion and $19 billion respectively, at a
point when they had no revenue. No banker or analyst or sage could have
told Zuckerberg what those acquisitions were worth; nobody knew better
than he did. He could see where things were going and help make them go
there. That talent turned out to be worth several hundred billion
dollars.
Jesse Eisenberg’s brilliant portrait of Zuckerberg in
The Social Network is misleading, as Antonio García Martínez, a former Facebook manager, argues in
Chaos Monkeys,
his entertainingly caustic book about his time at the company. The
movie Zuckerberg is a highly credible character, a computer genius
located somewhere on the autistic spectrum with minimal to non-existent
social skills. But that’s not what the man is really like. In real life,
Zuckerberg was studying for a degree with a double concentration in
computer science and – this is the part people tend to forget –
psychology. People on the spectrum have a limited sense of how other
people’s minds work; autists, it has been said, lack a ‘theory of mind’.
Zuckerberg, not so much. He is very well aware of how people’s minds
work and in particular of the social dynamics of popularity and status.
The initial launch of Facebook was limited to people with a Harvard
email address; the intention was to make access to the site seem
exclusive and aspirational. (And also to control site traffic so that
the servers never went down. Psychology and computer science, hand in
hand.) Then it was extended to other elite campuses in the US. When it
launched in the UK, it was limited to Oxbridge and the LSE. The idea was
that people wanted to look at what other people like them were doing,
to see their social networks, to compare, to boast and show off, to give
full rein to every moment of longing and envy, to keep their noses
pressed against the sweet-shop window of others’ lives.
This focus
attracted the attention of Facebook’s first external investor, the now
notorious Silicon Valley billionaire Peter Thiel. Again,
The Social Network
gets it right: Thiel’s $500,000 investment in 2004 was crucial to the
success of the company. But there was a particular reason Facebook
caught Thiel’s eye, rooted in a byway of intellectual history. In the
course of his studies at Stanford – he majored in philosophy – Thiel
became interested in the ideas of the US-based French philosopher René
Girard, as advocated in his most influential book,
Things Hidden since the Foundation of the World.
Girard’s big idea was something he called ‘mimetic desire’. Human
beings are born with a need for food and shelter. Once these fundamental
necessities of life have been acquired, we look around us at what other
people are doing, and wanting, and we copy them. In Thiel’s summary,
the idea is ‘that imitation is at the root of all behaviour’.
Girard
was a Christian, and his view of human nature is that it is fallen. We
don’t know what we want or who we are; we don’t really have values and
beliefs of our own; what we have instead is an instinct to copy and
compare. We are homo mimeticus. ‘Man is the creature who does not know
what to desire, and who turns to others in order to make up his mind. We
desire what others desire because we imitate their desires.’ Look
around, ye petty, and compare. The reason Thiel latched onto Facebook
with such alacrity was that he saw in it for the first time a business
that was Girardian to its core: built on people’s deep need to copy.
‘Facebook first spread by word of mouth, and it’s about word of mouth,
so it’s doubly mimetic,’ Thiel said. ‘Social media proved to be more
important than it looked, because it’s about our natures.’ We are keen
to be seen as we want to be seen, and Facebook is the most popular tool
humanity has ever had with which to do that.
*
The view
of human nature implied by these ideas is pretty dark. If all people
want to do is go and look at other people so that they can compare
themselves to them and copy what they want – if that is the final,
deepest truth about humanity and its motivations – then Facebook doesn’t
really have to take too much trouble over humanity’s welfare, since all
the bad things that happen to us are things we are doing to ourselves.
For all the corporate uplift of its mission statement, Facebook is a
company whose essential premise is misanthropic. It is perhaps for that
reason that Facebook, more than any other company of its size, has a
thread of malignity running through its story. The high-profile, tabloid
version of this has come in the form of incidents such as the
live-streaming of rapes, suicides, murders and cop-killings. But this is
one of the areas where Facebook seems to me relatively blameless.
People live-stream these terrible things over the site because it has
the biggest audience; if Snapchat or Periscope were bigger, they’d be
doing it there instead.
In many other areas, however, the site is
far from blameless. The highest-profile recent criticisms of the company
stem from its role in Trump’s election. There are two components to
this, one of them implicit in the nature of the site, which has an
inherent tendency to fragment and atomise its users into like-minded
groups. The mission to ‘connect’ turns out to mean, in practice, connect
with people who agree with you. We can’t prove just how dangerous these
‘filter bubbles’ are to our societies, but it seems clear that they are
having a severe impact on our increasingly fragmented polity. Our
conception of ‘we’ is becoming narrower.
This fragmentation
created the conditions for the second strand of Facebook’s culpability
in the Anglo-American political disasters of the last year. The
portmanteau terms for these developments are ‘fake news’ and
‘post-truth’, and they were made possible by the retreat from a general
agora of public debate into separate ideological bunkers. In the open
air, fake news can be debated and exposed; on Facebook, if you aren’t a
member of the community being served the lies, you’re quite likely never
to know that they are in circulation. It’s crucial to this that
Facebook has no financial interest in telling the truth. No company
better exemplifies the internet-age dictum that if the product is free,
you are the product. Facebook’s customers aren’t the people who are on
the site: its customers are the advertisers who use its network and who
relish its ability to direct ads to receptive audiences. Why would
Facebook care if the news streaming over the site is fake? Its interest
is in the targeting, not in the content. This is probably one reason for
the change in the company’s mission statement. If your only interest is
in connecting people, why would you care about falsehoods? They might
even be better than the truth, since they are quicker to identify the
like-minded. The newfound ambition to ‘build communities’ makes it seem
as if the company is taking more of an interest in the consequence of
the connections it fosters.
Fake news is not, as Facebook has acknowledged, the only way it
was used to influence the outcome of the 2016 presidential election. On
6 January 2017 the director of national intelligence published a report
saying that the Russians had waged an internet disinformation campaign
to damage Hillary Clinton and help Trump. ‘Moscow’s influence campaign
followed a Russian messaging strategy that blends covert intelligence
operations – such as cyber-activity – with overt efforts by Russian
government agencies, state-funded media, third-party intermediaries, and
paid social media users or “trolls”,’ the report said. At the end of
April, Facebook got around to admitting this (by then) fairly obvious
truth, in an interesting paper published by its internal security
division. ‘Fake news’, they argue, is an unhelpful, catch-all term
because misinformation is in fact spread in a variety of ways:
- Information (or Influence) Operations – Actions taken by governments or organised non-state actors to distort domestic or foreign political sentiment.
- False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.
- False Amplifiers – Co-ordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g. by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).
- Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information.
The company is
promising to treat this problem or set of problems as seriously as it
treats such other problems as malware, account hacking and spam. We’ll
see. One man’s fake news is another’s truth-telling, and Facebook works
hard at avoiding responsibility for the content on its site – except for
sexual content, about which it is super-stringent. Nary a nipple on
show. It’s a bizarre set of priorities, which only makes sense in an
American context, where any whiff of explicit sexuality would
immediately give the site a reputation for unwholesomeness. Photos of
breastfeeding women are banned and rapidly get taken down. Lies and
propaganda are fine.
The key to understanding this is to think
about what advertisers want: they don’t want to appear next to pictures
of breasts because it might damage their brands, but they don’t mind
appearing alongside lies because the lies might be helping them find the
consumers they’re trying to target. In
Move Fast and Break Things,
his polemic against the ‘digital-age robber barons’, Jonathan Taplin
points to an analysis on Buzzfeed: ‘In the final three months of the US
presidential campaign, the top-performing fake election news stories on
Facebook generated more engagement than the top stories from major news
outlets such as the
New York Times,
Washington Post,
Huffington Post, NBC News and others.’ This doesn’t sound like a problem Facebook will be in any hurry to fix.
The
fact is that fraudulent content, and stolen content, are rife on
Facebook, and the company doesn’t really mind, because it isn’t in its
interest to mind. Much of the video content on the site is stolen from
the people who created it. An illuminating YouTube video from
Kurzgesagt, a German outfit that makes high-quality short explanatory
films, notes that in 2015, 725 of Facebook’s top one thousand most
viewed videos were stolen. This is another area where Facebook’s
interests contradict society’s. We may collectively have an interest in
sustaining creative and imaginative work in many different forms and on
many platforms. Facebook doesn’t. It has two priorities, as Martínez
explains in
Chaos Monkeys: growth and monetisation. It simply
doesn’t care where the content comes from. It is only now starting to
care about the perception that much of the content is fraudulent,
because if that perception were to become general, it might affect the
amount of trust and therefore the amount of time people give to the
site.
Zuckerberg himself has spoken up on this issue, in a
Facebook post addressing the question of ‘Facebook and the election’.
After a certain amount of boilerplate bullshit (‘Our goal is to give
every person a voice. We believe deeply in people’), he gets to the nub
of it. ‘Of all the content on Facebook, more than 99 per cent of what
people see is authentic. Only a very small amount is fake news and
hoaxes.’ More than one Facebook user pointed out that in their own news
feed, Zuckerberg’s post about authenticity ran next to fake news. In one
case, the fake story pretended to be from the TV sports channel ESPN.
When it was clicked on, it took users to an ad selling a diet
supplement. As the writer Doc Searls pointed out, it’s a double fraud,
‘outright lies from a forged source’, which is quite something to have
right slap next to the head of Facebook boasting about the absence of
fraud. Evan Williams, co-founder of Twitter and founder of the long-read
specialist Medium, found the same post by Zuckerberg next to a
different fake ESPN story and another piece of fake news purporting to
be from CNN, announcing that Congress had disqualified Trump from
office. When clicked through, that turned out to be from a company
offering a 12-week programme to strengthen toes. (That’s right:
strengthen toes.) Still, we now know that Zuck believes in people.
That’s the main thing.
*
A neutral observer
might wonder if Facebook’s attitude to content creators is sustainable.
Facebook needs content, obviously, because that’s what the site
consists of: content that other people have created. It’s just that it
isn’t too keen on anyone apart from Facebook making any money from that
content. Over time, that attitude is profoundly destructive to the
creative and media industries. Access to an audience – that
unprecedented two billion people – is a wonderful thing, but Facebook
isn’t in any hurry to help you make money from it. If the content
providers all eventually go broke, well, that might not be too much of a
problem. There are, for now, lots of willing providers: anyone on
Facebook is in a sense working for Facebook, adding value to the
company. In 2014, the New York Times did the arithmetic and
found that humanity was spending 39,757 collective years on the site,
every single day. Jonathan Taplin points out that this is ‘almost
fifteen million years of free labour per year’. That was back when it
had a mere 1.23 billion users.
Taplin has worked in academia and
in the film industry. The reason he feels so strongly about these
questions is that he started out in the music business, as manager of
The Band, and was on hand to watch the business being destroyed by the
internet. What had been a $20 billion industry in 1999 was a $7 billion
industry 15 years later. He saw musicians who had made a good living
become destitute. That didn’t happen because people had stopped
listening to their music – more people than ever were listening to it –
but because music had become something people expected to be free.
YouTube is the biggest source of music in the world, playing billions of
tracks annually, but in 2015 musicians earned less from it and from its
ad-supported rivals than they earned from sales of vinyl. Not CDs and
recordings in general: vinyl.
Something similar has happened in the world of journalism.
Facebook is in essence an advertising company which is indifferent to
the content on its site except insofar as it helps to target and sell
advertisements. A version of Gresham’s law is at work, in which fake
news, which gets more clicks and is free to produce, drives out real
news, which often tells people things they don’t want to hear, and is
expensive to produce. In addition, Facebook uses an extensive set of
tricks to increase its traffic and the revenue it makes from targeting
ads, at the expense of the news-making institutions whose content it
hosts. Its news feed directs traffic at you based not on your interests,
but on how to make the maximum amount of advertising revenue from you.
In September 2016, Alan Rusbridger, the former editor of the
Guardian, told a
Financial Times
conference that Facebook had ‘sucked up $27 million’ of the newspaper’s
projected ad revenue that year. ‘They are taking all the money because
they have algorithms we don’t understand, which are a filter between
what we do and how people receive it.’
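To make the claim concrete, here is a crude sketch, in Python, of the ranking logic this passage attributes to the news feed: content ordered by what it earns the platform rather than by what it means to the reader. The posts and scores are invented for illustration; Facebook’s actual ranking system is proprietary and vastly more elaborate.

```python
# Invented posts and scores; a toy contrast between two ways of ordering a feed.
posts = [
    {"headline": "Your friend's wedding photos", "interest": 0.9, "ad_revenue": 0.01},
    {"headline": "Outrage bait (fake)", "interest": 0.4, "ad_revenue": 0.09},
]

by_interest = sorted(posts, key=lambda p: p["interest"], reverse=True)
by_revenue = sorted(posts, key=lambda p: p["ad_revenue"], reverse=True)

print([p["headline"] for p in by_interest])  # the feed you might want
print([p["headline"] for p in by_revenue])   # the feed the passage describes
```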
This goes to the heart of
the question of what Facebook is and what it does. For all the talk
about connecting people, building community, and believing in people,
Facebook is an advertising company. Martínez gives the clearest account
both of how it ended up like that, and how Facebook advertising works.
In the early years of Facebook, Zuckerberg was much more interested in
the growth side of the company than in the monetisation. That changed
when Facebook went in search of its big payday at the initial public
offering, the shining day when shares in a business first go on sale to
the general public. This is a huge turning-point for any start-up: in
the case of many tech industry workers, the hope and expectation
associated with ‘going public’ is what attracted them to their firm in
the first place, and/or what has kept them glued to their workstations.
It’s the point where the notional money of an early-days business turns
into the real cash of a public company.
Martínez was there at the
very moment when Zuck got everyone together to tell them they were going
public, the moment when all Facebook employees knew that they were
about to become rich:
I had chosen a
seat behind a detached pair, who on further inspection turned out to be
Chris Cox, head of FB product, and Naomi Gleit, a Harvard grad who
joined as employee number 29, and was now reputed to be the current
longest-serving employee other than Mark.
Naomi, between chats
with Cox, was clicking away on her laptop, paying little attention to
the Zuckian harangue. I peered over her shoulder at her screen. She was
scrolling down an email with a number of links, and progressively
clicking each one into existence as another tab on her browser.
Clickathon finished, she began lingering on each with an appraiser’s
eye. They were real estate listings, each for a different San Francisco
property.
Martínez took note of
one of the properties and looked it up later. Price: $2.4 million. He is
fascinating, and fascinatingly bitter, on the subject of class and
status differences in Silicon Valley, in particular the never publicly
discussed issue of the huge gulf between early employees in a company,
who have often been made unfathomably rich, and the wage slaves who join
the firm later in its story. ‘The protocol is not to talk about it at
all publicly.’ But, as Bonnie Brown, a masseuse at Google in the early
days, wrote in her memoir, ‘a sharp contrast developed between Googlers
working side by side. While one was looking at local movie times on
their monitor, the other was booking a flight to Belize for the weekend.
How was the conversation on Monday morning going to sound now?’
When
the time came for the IPO, Facebook needed to turn from a company with
amazing growth to one that was making amazing money. It was already
making some, thanks to its sheer size – as Martínez observes, ‘a billion
times any number is still a big fucking number’ – but not enough to
guarantee a truly spectacular valuation on launch. It was at this stage
that the question of how to monetise Facebook got Zuckerberg’s full
attention. It’s interesting, and to his credit, that he hadn’t put too
much focus on it before – perhaps because he isn’t particularly
interested in money per se. But he does like to win.
The solution
was to take the huge amount of information Facebook has about its
‘community’ and use it to let advertisers target ads with a specificity
never known before, in any medium. Martínez: ‘It can be demographic in
nature (e.g. 30-to-40-year-old females), geographic (people within five
miles of Sarasota, Florida), or even based on Facebook profile data (do
you have children; i.e. are you in the mommy segment?).’ Taplin makes
the same point:
If I want to reach
women between the ages of 25 and 30 in zip code 37206 who like country
music and drink bourbon, Facebook can do that. Moreover, Facebook can
often get friends of these women to post a ‘sponsored story’ on a
targeted consumer’s news feed, so it doesn’t feel like an ad. As
Zuckerberg said when he introduced Facebook Ads, ‘Nothing influences
people more than a recommendation from a trusted friend. A trusted
referral is the Holy Grail of advertising.’
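To spell out the mechanics, here is a minimal sketch of that kind of audience filter, using Taplin’s example. The field names and matching rules are invented for illustration; this is a toy, not Facebook’s actual ads API.

```python
# A toy demographic/geographic/interest filter. All fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    gender: str
    zip_code: str
    interests: set

def matches(profile: Profile, spec: dict) -> bool:
    """True if a user profile falls inside the advertiser's targeting spec."""
    return (spec["min_age"] <= profile.age <= spec["max_age"]
            and profile.gender == spec["gender"]
            and profile.zip_code == spec["zip_code"]
            and spec["interests"] <= profile.interests)  # subset test

# Women aged 25-30 in zip 37206 who like country music and drink bourbon.
spec = {"min_age": 25, "max_age": 30, "gender": "female",
        "zip_code": "37206", "interests": {"country music", "bourbon"}}

users = [Profile(27, "female", "37206", {"country music", "bourbon", "hiking"}),
         Profile(45, "male", "37206", {"bourbon"})]

audience = [u for u in users if matches(u, spec)]  # only the first user qualifies
```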
That was
the first part of the monetisation process for Facebook, when it turned
its gigantic scale into a machine for making money. The company offered
advertisers an unprecedentedly precise tool for targeting their ads at
particular consumers. (Particular segments of voters too can be targeted
with complete precision. One instance from 2016 was an anti-Clinton ad
repeating a notorious speech she made in 1996 on the subject of
‘super-predators’. The ad was sent to African-American voters in areas
where the Republicans were trying, successfully as it turned out, to
suppress the Democratic vote. Nobody else saw the ads.)
The second
big shift around monetisation came in 2012 when internet traffic began
to switch away from desktop computers towards mobile devices. If you do
most of your online reading on a desktop, you are in a minority. The
switch was a potential disaster for all businesses which relied on
internet advertising, because people don’t much like mobile ads, and
were far less likely to click on them than on desktop ads. In other
words, although general internet traffic was increasing rapidly, because
the growth was coming from mobile, the traffic was becoming
proportionately less valuable. If the trend were to continue, every
internet business that depended on people clicking links – i.e. pretty
much all of them, but especially the giants like Google and Facebook –
would be worth much less money.
Facebook solved the problem by
means of a technique called ‘onboarding’. As Martínez explains it, the
best way to think about this is to consider our various kinds of name
and address.
For example, if Bed, Bath and Beyond wants to get my attention with one of its wonderful 20 per cent off coupons, it calls out:
Antonio García Martínez
1 Clarence Place #13
San Francisco, CA 94107
If it wants to reach me on my mobile device, my name there is:
38400000-8cf0-11bd-b23e-10b96e40000d
That’s my quasi-immutable device ID, broadcast hundreds of times a day on mobile ad exchanges.
On my laptop, my name is this:
07J6yJPMB9juTowar.AWXGQnGPA1MCmThgb9wN4vLoUpg.BUUtWg.rg.FTN.0.AWUxZtUf
This is the content of the Facebook re-targeting cookie, which is used to target ads at you based on your mobile browsing.
Though
it may not be obvious, each of these keys is associated with a wealth
of our personal behaviour data: every website we’ve been to, many things
we’ve bought in physical stores, and every app we’ve used and what we
did there … The biggest thing going on in marketing right now, what is
generating tens of billions of dollars in investment and endless
scheming inside the bowels of Facebook, Google, Amazon and Apple, is how
to tie these different sets of names together, and who controls the
links. That’s it.
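A toy version of the ‘joining’ Martínez describes might look like the sketch below: three namespaces – postal identity, mobile device ID, browser cookie – resolved to a single person. Every record and identifier here is invented; the point is only the shape of the data.

```python
# Invented records: three identifier namespaces joined to one person.

# Offline identity, of the kind bought in from data brokers such as Experian.
offline = {
    "person_123": {"name": "A. Example", "address": "San Francisco, CA 94107"},
}

# Online identifiers the platform observes directly (both invented).
device_to_person = {"38400000-0000-0000-0000-000000000000": "person_123"}
cookie_to_person = {"cookie_abc123": "person_123"}

def resolve(identifier):
    """Map any identifier - device ID or cookie - back to one offline record."""
    person = device_to_person.get(identifier) or cookie_to_person.get(identifier)
    return offline.get(person)

print(resolve("38400000-0000-0000-0000-000000000000"))
# {'name': 'A. Example', 'address': 'San Francisco, CA 94107'}
```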
Facebook already had a huge amount of information about people and their social networks and their professed likes and dislikes.
After waking up to the importance of monetisation, they added to their
own data a huge new store of data about offline, real-world behaviour,
acquired through partnerships with big companies such as Experian, which
have been monitoring consumer purchases for decades via their
relationships with direct marketing firms, credit card companies, and
retailers. There doesn’t seem to be a one-word description of these
firms: ‘consumer credit agencies’ or something similar about sums it up.
Their reach is much broader than that makes it sound, though.
Experian says its data is based on more than 850 million records and
claims to have information on 49.7 million UK adults living in 25.2
million households in 1.73 million postcodes. These firms know all there
is to know about your name and address, your income and level of
education, your relationship status, plus everywhere you’ve ever paid
for anything with a card. Facebook could now put your identity together
with the unique device identifier on your phone.
That was crucial to Facebook’s new profitability. On mobiles, people tend to prefer apps to the open web, and apps corral the information they gather and don’t share it with other companies. A game app on your
phone is unlikely to know anything about you except the level you’ve
got to on that particular game. But because everyone in the world is on
Facebook, the company knows everyone’s phone identifier. It was now able
to set up an ad server delivering far better targeted mobile ads than
anyone else could manage, and it did so in a more elegant and
well-integrated form than anyone else had managed.
So Facebook
knows your phone ID and can add it to your Facebook ID. It puts that
together with the rest of your online activity: not just every site
you’ve ever visited, but every click you’ve ever made – the Facebook
button tracks every Facebook user, whether they click on it or not.
Since the Facebook button is pretty much ubiquitous on the net, this
means that Facebook sees you, everywhere. Now, thanks to its
partnerships with the old-school credit firms, Facebook knew who
everybody was, where they lived, and everything they’d ever bought with
plastic in a real-world offline shop.
All this information is used for a purpose which is, in the final
analysis, profoundly bathetic. It is to sell you things via online ads.
The
ads work on two models. In one of them, advertisers ask Facebook to
target consumers from a particular demographic – our thirty-something
bourbon-drinking country music fan, or our African American in
Philadelphia who was lukewarm about Hillary. But Facebook also delivers
ads via a process of online auctions, which happen in real time whenever
you click on a website. Because every website you’ve ever visited (more
or less) has planted a cookie on your web browser, when you go to a new
site, there is a real-time auction, in millionths of a second, to
decide what your eyeballs are worth and what ads should be served to
them, based on what your interests, and income level and whatnot, are
known to be. This is the reason ads have that disconcerting tendency to
follow you around, so that you look at a new telly or a pair of shoes or
a holiday destination, and they’re still turning up on every site you
visit weeks later. This was how, by chucking talent and resources at the
problem, Facebook was able to turn mobile from a potential revenue
disaster to a great hot steamy geyser of profit.
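For the curious, a simplified sketch of such an auction follows. Real exchanges run variants of this (second-price historically, increasingly first-price) in fractions of a millisecond; the bidders and valuations here are invented.

```python
# A toy second-price auction over invented bidders.

def run_auction(bids):
    """Second-price auction: the highest bidder wins the ad slot but pays
    the runner-up's bid. `bids` maps advertiser -> value of this user's
    attention, a figure derived from tracking data of the kind described."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

bids = {"shoe_retailer": 2.40, "travel_site": 1.90, "tv_brand": 0.75}
print(run_auction(bids))  # ('shoe_retailer', 1.9): wins, pays the second price
```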
What this means
is that even more than it is in the advertising business, Facebook is in
the surveillance business. Facebook, in fact, is the biggest
surveillance-based enterprise in the history of mankind. It knows far,
far more about you than the most intrusive government has ever known
about its citizens. It’s amazing that people haven’t really understood
this about the company. I’ve spent time thinking about Facebook, and the
thing I keep coming back to is that its users don’t realise what it is
the company does. What Facebook does is watch you, and then use what it
knows about you and your behaviour to sell ads. I’m not sure there has
ever been a more complete disconnect between what a company says it does
– ‘connect’, ‘build communities’ – and the commercial reality. Note
that the company’s knowledge about its users isn’t used merely to target
ads but to shape the flow of news to them. Since there is so much
content posted on the site, the algorithms used to filter and direct
that content are the thing that determines what you see: people think
their news feed is largely to do with their friends and interests, and
it sort of is, with the crucial proviso that it is their friends and
interests as mediated by the commercial interests of Facebook. Your eyes
are directed towards the place where they are most valuable for
Facebook.
*
I’m left
wondering what will happen when and if this $450 billion penny drops.
Wu’s history of attention merchants shows that there is a suggestive
pattern here: that a boom is more often than not followed by a backlash,
that a period of explosive growth triggers a public and sometimes
legislative reaction. Wu’s first example is the draconian anti-poster
laws introduced in early 20th-century Paris (and still in force – one
reason the city is by contemporary standards undisfigured by ads). As Wu
says, ‘when the commodity in question is access to people’s minds, the
perpetual quest for growth ensures that forms of backlash, both major
and minor, are all but inevitable.’ Wu calls a minor form of this
phenomenon the ‘disenchantment effect’.
Facebook seems vulnerable
to these disenchantment effects. One place they are likely to begin is
in the core area of its business model – ad-selling. The advertising it
sells is ‘programmatic’, i.e. determined by computer algorithms that
match the customer to the advertiser and deliver ads accordingly, via
targeting and/or online auctions. The problem with this from the
customer’s point of view – remember, the customer here is the
advertiser, not the Facebook user – is that a lot of the clicks on these
ads are fake. There is a mismatch of interests here. Facebook wants
clicks, because that’s how it gets paid: when ads are clicked on. But
what if the clicks aren’t real but are instead automated clicks from
fake accounts run by computer bots? This is a well-known problem, which
particularly affects Google, because it’s easy to set up a site, allow
it to host programmatic ads, then set up a bot to click on those ads,
and collect the money that comes rolling in. On Facebook the fraudulent
clicks are more likely to be from competitors trying to drive each
other’s costs up.
The industry publication
Ad Week
estimates the annual cost of click fraud at $7 billion, about a sixth of
the entire market. A single fraud operation, Methbot, whose existence was
exposed at the end of last year, uses a network of hacked computers to
generate between three and five million dollars’ worth of fraudulent
clicks every day. Estimates of fraudulent traffic’s market share are
variable, with some guesses coming in at around 50 per cent; some
website owners say their own data indicates a fraudulent-click rate of
90 per cent. This is by no means entirely Facebook’s problem, but it
isn’t hard to imagine how it could lead to a big revolt against ‘ad
tech’, as this technology is generally known, on the part of the
companies who are paying for it. I’ve heard academics in the field say
that there is a form of corporate groupthink in the world of the big
buyers of advertising, who are currently responsible for directing large
parts of their budgets towards Facebook. That mindset could change.
Also, many of Facebook’s metrics are tilted to catch the light at the
angle which makes them look shiniest. A video is counted as ‘viewed’ on
Facebook if it runs for three seconds, even if the user is scrolling
past it in her news feed and even if the sound is off. Many Facebook
videos with hundreds of thousands of ‘views’, if counted by the
techniques that are used to count television audiences, would have no
viewers at all.
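The asymmetry is easy to state in code. The three-second rule is from the text; the ‘TV-style’ threshold below is invented purely for contrast.

```python
# Two view-counting standards applied to the same plays.

def facebook_view(seconds_on_screen: float, sound_on: bool) -> bool:
    # Counted as a 'view' after three seconds, sound on or off,
    # even while scrolling past.
    return seconds_on_screen >= 3

def tv_style_view(seconds_on_screen: float, sound_on: bool,
                  minimum: float = 30) -> bool:
    # A hypothetical stricter standard: sustained watching, sound on.
    return seconds_on_screen >= minimum and sound_on

plays = [(3, False), (12, False), (45, True)]
print(sum(facebook_view(s, snd) for s, snd in plays))  # 3 'views'
print(sum(tv_style_view(s, snd) for s, snd in plays))  # 1 view
```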
A customers’ revolt could overlap with a backlash
from regulators and governments. Google and Facebook have what amounts
to a monopoly on digital advertising. That monopoly power is becoming
more and more important as advertising spend migrates online. Between
them, they have already destroyed large sections of the newspaper
industry. Facebook has done a huge amount to lower the quality of public
debate and to ensure that it is easier than ever before to tell what
Hitler approvingly called ‘big lies’ and broadcast them to a big
audience. The company has no business need to care about that, but it is
the kind of issue that could attract the attention of regulators.
That isn’t the only external threat to the Google/Facebook
duopoly. The US attitude to anti-trust law was shaped by Robert Bork,
the judge whom Reagan nominated for the Supreme Court but the Senate
failed to confirm. Bork’s most influential legal stance came in the area
of competition law. He promulgated the doctrine that the only form of
anti-competitive action which matters concerns the prices paid by
consumers. His idea was that if the price is falling that means the
market is working, and no questions of monopoly need be addressed. This
philosophy still shapes regulatory attitudes in the US and it’s the
reason Amazon, for instance, has been left alone by regulators despite
the manifestly monopolistic position it holds in the world of online
retail, books especially.
The big internet enterprises seem
invulnerable on these narrow grounds. Or they do until you consider the
question of individualised pricing. The huge data trail we all leave
behind as we move around the internet is increasingly used to target us
with prices which aren’t like the tags attached to goods in a shop. On
the contrary, they are dynamic, moving with our perceived ability to
pay.
Four researchers based in Spain studied the phenomenon by creating
automated personas to behave as if, in one case, ‘budget conscious’ and
in another ‘affluent’, and then checking to see if their different
behaviour led to different prices. It did: a search for headphones
returned a set of results which were on average four times more
expensive for the affluent persona. An airline-ticket discount site
charged higher fares to the affluent consumer. In general, the location
of the searcher caused prices to vary by as much as 166 per cent. So in
short, yes, personalised prices are a thing, and the ability to create
them depends on tracking us across the internet. That seems to me a
prima facie violation of the American post-Bork monopoly laws, focused
as they are entirely on price. It’s sort of funny, and also sort of
grotesque, that an unprecedentedly huge apparatus of consumer
surveillance is fine, apparently, but an unprecedentedly huge apparatus
of consumer surveillance which results in some people paying higher
prices may well be illegal.
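The study’s method can be sketched in a few lines: build contrasting personas by scripted browsing, then compare the prices each persona is served. Everything below is a stand-in – the browsing step is a stub and the prices are dummies, chosen only to echo the roughly fourfold gap the researchers reported for headphones.

```python
# A toy price-discrimination audit with invented data.
import statistics

def browse_as(persona: str) -> None:
    """Stand-in for a scripted session that builds up a persona's tracking
    profile (sites visited, searches made) before any price checks."""
    ...

def observed_prices(persona: str, query: str) -> list:
    # In a real audit these would be scraped from live search results.
    dummy = {
        "affluent": {"headphones": [320.0, 280.0, 410.0]},
        "budget": {"headphones": [80.0, 65.0, 99.0]},
    }
    return dummy[persona][query]

browse_as("affluent"), browse_as("budget")
ratio = (statistics.mean(observed_prices("affluent", "headphones"))
         / statistics.mean(observed_prices("budget", "headphones")))
print(f"affluent persona sees prices {ratio:.1f}x higher on average")  # ~4x
```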
Perhaps the biggest potential threat
to Facebook is that its users might go off it. Two billion monthly
active users is a lot of people, and the ‘network effects’ – the scale
of the connectivity – are, obviously, extraordinary. But there are other
internet companies which connect people on the same scale – Snapchat
has 166 million daily users, Twitter 328 million monthly users – and as
we’ve seen in the disappearance of Myspace, the onetime leader in social
media, when people change their minds about a service, they can go off
it hard and fast.
For that reason, were it to be generally
understood that Facebook’s business model is based on surveillance, the
company would be in danger. The one time Facebook did poll its users
about the surveillance model was in 2011, when it proposed a change to
its terms and conditions – the change that underpins the current
template for its use of data. The result of the poll was clear: 90 per
cent of the vote was against the changes. Facebook went ahead and made
them anyway, on the grounds that so few people had voted. No surprise
there, neither in the users’ distaste for surveillance nor in the
company’s indifference to that distaste. But this is something which
could change.
The other thing that could happen at the level of
individual users is that people stop using Facebook because it makes
them unhappy. This isn’t the same issue as the scandal in 2014 when it
turned out that social scientists at the company had deliberately
manipulated some people’s news feeds to see what effect, if any, it had
on their emotions. The resulting paper, published in the
Proceedings of the National Academy of Sciences,
was a study of ‘social contagion’, or the transfer of emotion among
groups of people, as a result of a change in the nature of the stories
seen by 689,003 users of Facebook. ‘When positive expressions were
reduced, people produced fewer positive posts and more negative posts;
when negative expressions were reduced, the opposite pattern occurred.
These results indicate that emotions expressed by others on Facebook
influence our own emotions, constituting experimental evidence for
massive-scale contagion via social networks.’ The scientists seem not to
have considered how this information would be received, and the story
played quite big for a while.
Perhaps the fact that people already
knew this story accidentally deflected attention from what should have
been a bigger scandal, exposed earlier this year in a paper from the
American Journal of Epidemiology.
The paper was titled ‘Association of Facebook Use with Compromised
Well-Being: A Longitudinal Study’. The researchers found quite simply
that the more people use Facebook, the more unhappy they are. A 1 per
cent increase in ‘likes’ and clicks and status updates was correlated
with a 5 to 8 per cent decrease in mental health. In addition, they
found that the positive effect of real-world interactions, which enhance
well-being, was accurately paralleled by the ‘negative associations of
Facebook use’. In effect people were swapping real relationships which
made them feel good for time on Facebook which made them feel bad.
That’s my gloss rather than that of the scientists, who take the trouble
to make it clear that this is a correlation rather than a definite
causal relationship, but they did go so far – unusually far – as to say
that the data ‘suggests a possible trade-off between offline and online
relationships’. This isn’t the first time something like this effect has
been found. To sum up: there is a lot of research showing that Facebook
makes people feel like shit. So maybe, one day, people will stop using
it.
*
What, though,
if none of the above happens? What if advertisers don’t rebel,
governments don’t act, users don’t quit, and the good ship Zuckerberg
and all who sail in her continues blithely on? We should look again at
that figure of two billion monthly active users. The total number of
people who have any access to the internet – as broadly defined as
possible, to include the slowest dial-up speeds and creakiest
developing-world mobile service, as well as people who have access but
don’t use it – is three and a half billion. Of those, about 750 million
are in China and Iran, which block Facebook. Russians, about a hundred
million of whom are on the net, tend not to use Facebook because they
prefer their native copycat site VKontakte. So put the potential
audience for the site at 2.6 billion. In developed countries where
Facebook has been present for years, use of the site peaks at about 75
per cent of the population (that’s in the US). That would imply a total
potential audience for Facebook of 1.95 billion. At two billion monthly
active users, Facebook has already gone past that number, and is running
out of connected humans. Martínez compares Zuckerberg to Alexander the
Great, weeping because he has no more worlds to conquer. Perhaps this is
one reason for the early signals Zuck has sent about running for
president – the fifty-state pretending-to-give-a-shit tour, the
thoughtful-listening pose he’s photographed in while sharing milkshakes
in (Presidential Ambitions klaxon!) an Iowa diner.
Whatever comes
next will take us back to those two pillars of the company, growth and
monetisation. Growth can only come from connecting new areas of the
planet. An early experiment came in the form of Free Basics, a programme
offering internet connectivity to remote villages in India, with the
proviso that the range of sites on offer should be controlled by
Facebook. ‘Who could possibly be against this?’ Zuckerberg wrote in the
Times of India.
The answer: lots and lots of angry Indians. The government ruled that
Facebook shouldn’t be able to ‘shape users’ internet experience’ by
restricting access to the broader internet. A Facebook board member
tweeted that ‘anti-colonialism has been economically catastrophic for
the Indian people for decades. Why stop now?’ As Taplin points out, that
remark ‘unwittingly revealed a previously unspoken truth: Facebook and
Google are the new colonial powers.’
So the growth side of the equation is not without its
challenges, technological as well as political. Google (which has a
similar running-out-of-humans problem) is working on ‘Project Loon’, ‘a
network of balloons travelling on the edge of space, designed to extend
internet connectivity to people in rural and remote areas worldwide’.
Facebook is working on a project involving a solar-powered drone called
the Aquila, which has the wingspan of a commercial airliner, weighs less
than a car, and when cruising uses less energy than a microwave oven.
The idea is that it will circle remote, currently unconnected areas of
the planet, for flights that last as long as three months at a time. It
connects users via laser and was developed in Bridgwater, Somerset.
(Amazon’s drone programme is based in the UK too, near Cambridge. Our
legal regime is pro-drone.) Even the most hardened Facebook sceptic has
to be a little bit impressed by the ambition and energy. But the fact
remains that the next two billion users are going to be hard to find.
That’s
growth, which will mainly happen in the developing world. Here in the
rich world, the focus is more on monetisation, and it’s in this area
that I have to admit something which is probably already apparent. I am
scared of Facebook. The company’s ambition, its ruthlessness, and its
lack of a moral compass scare me. It goes back to that moment of its
creation, Zuckerberg at his keyboard after a few drinks creating a
website to compare people’s appearance, not for any real reason other
than that he was able to do it. That’s the crucial thing about Facebook,
the main thing which isn’t understood about its motivation: it does
things because it can. Zuckerberg knows how to do something, and other
people don’t, so he does it. Motivation of that type doesn’t work in the
Hollywood version of life, so Aaron Sorkin had to give Zuck a motive to
do with social aspiration and rejection. But that’s wrong, completely
wrong. He isn’t motivated by that kind of garden-variety psychology. He
does this because he can, and justifications about ‘connection’ and
‘community’ are ex post facto rationalisations. The drive is simpler and
more basic. That’s why the impulse to growth has been so fundamental to
the company, which is in many respects more like a virus than it is
like a business. Grow and multiply and monetise. Why? There is no why.
Because.
Automation and artificial intelligence are going to have a
big impact in all kinds of worlds. These technologies are new and real
and they are coming soon. Facebook is deeply interested in these trends.
We don’t know where this is going, we don’t know what the social costs
and consequences will be, we don’t know what will be the next area of
life to be hollowed out, the next business model to be destroyed, the
next company to go the way of Polaroid or the next business to go the
way of journalism or the next set of tools and techniques to become
available to the people who used Facebook to manipulate the elections of
2016. We just don’t know what’s next, but we know it’s likely to be
consequential, and that a big part will be played by the world’s biggest
social network. On the evidence of Facebook’s actions so far, it’s
impossible to face this prospect without unease.