First was web 1.0, the original form: A network of pages tied together by hyperlinks, interconnected in a self-referencing mesh. Then came web 2.0, the social web: blogs, forums, social media platforms, endless channels filled with user-generated content curated by and for the same audience that made it.
Today the seeds of web 3.0 are starting to take root. The third iteration of web technology brings us the decentralized web: a place where services are distributed rather than localized, where users own and control their own data, and where smaller players take back power from corporate giants like Google and Amazon.
The exact meaning of web 3.0, and how it will change our digital lives, is still being defined, but web decentralization is a trend already under way, and an idea that is gradually taking hold. It represents an important progression for digital culture, but also a return to the values of the original web, where autonomy and creative expression were decoupled from commercial interest.
Here, we unpack the principles of the decentralized web: the problems it addresses, the solutions it offers, and the future that it suggests.
What is the web?
The internet is not the web, just as the climate is not the weather: one is a big, ambient, relatively stable system, while the other is a component of that system, subject to trends. The internet is the network itself: the vast system of wires and cables and radio signals and data transport protocols that allows the billions of online devices across the world to connect. The web is a way to send a certain type of information over the internet using a protocol called HTTP—information that generally constitutes websites.
To understand decentralization we first need to understand centralization, which means thinking about where the web is. We don’t usually think about the web as being anywhere, particularly, or else we think of the web as some nebulous place in the ether that we can visit with our computers or phones. But the web exists on other computers, called servers, which are connected to the internet all the time, and whose job it is to “serve” us web pages when we request them.
The modern web is built on the “client-server model,” a phrase you may not have heard, but which is the underlying principle of every website you’ve visited. The client is the computer that sends a request for information—bearing in mind that these days a computer might just as well be a smartphone, or a fridge, or even a car—and in turn, the server responds with data of some kind, like the text and images of a webpage, a route suggestion from Google Maps, or the contents of a file to be downloaded.
Here’s an important point: No matter how many clients connect to a server simultaneously, each client communicates only with the server; the clients never talk directly to each other (even though they might be physically closer to one another than to the server).
The server will always occupy a central position in the communication. Unsurprisingly, this creates a centralized model of web services.
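The request-response loop described above can be sketched with Python's standard library alone. This is a minimal illustration, not production code: the handler, the page content, and the localhost address are all arbitrary choices for the demo.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server "serves" a page only in response to a client's request.
        body = b"<html><body>Hello from the server</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port; the server sits waiting for clients.
server = HTTPServer(("127.0.0.1", 0), PageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client (urllib standing in for a browser) initiates every exchange.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as response:
    status = response.status
    page = response.read().decode()

print(status)
print(page)
server.shutdown()
```

Note the asymmetry: the client always asks, the server always answers. Two clients of this server have no way to reach each other through it.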
What’s the problem?
Since the server occupies this critical position in the client-server model, it becomes a single point of failure for the website as a whole. One common failure mode: the server receives so many requests in a short period that it can no longer respond to new ones. Anyone trying to access the page after that point will receive an error message.
This limit on server capacity is what’s exploited in DDoS (distributed denial of service) attacks, where a hacker uses a network of hijacked computers to repeatedly request information from a website, eventually taking it offline. But the same thing can happen unintentionally, as when an article from a usually low-traffic website goes viral and the flood of new visitors overwhelms the server.
Apart from the limiting factors of client-server architecture, there’s a higher-level problem with web centralization that stems from the increasing consolidation of web hosting providers.
Currently, Amazon’s AWS web hosting accounts for around 34 percent of the cloud infrastructure market, more than the next four biggest players (Google, Microsoft, IBM, Alibaba) combined. This means Amazon’s services are critical to the operation of many other websites and applications that we don’t usually think of as connected to Amazon—as was shown in February 2017, when a small error by an AWS employee took down some of the biggest-name web services across America.
While much of the early web was hosted on computers that were owned and operated by the people who created the content, more and more of the web is now hosted on servers belonging to a small group of giant companies. Paradoxically, while there’s a greater diversity of content and services online than ever before, control over the hosting and distribution of these services is also more concentrated than it has ever been.
Our experience as internet users is now largely shaped by the influence of the Big Five tech companies: Microsoft, Google, Facebook, Apple, and Amazon. As early as 2012, science-fiction author and technology futurist Bruce Sterling said that these five “stacks” were becoming the internet. For similar reasons, Tim Berners-Lee, inventor of the HTTP protocol and de facto father of the web, has spoken frequently about the need to actively resist web centralization so as to keep his creation in the hands of the people.
What’s the alternative?
Since you’re reading BREAKER, you’ve probably heard the phrase “peer-to-peer” somewhere on your journey into blockchain technology. It’s in the title of Satoshi Nakamoto’s white paper “Bitcoin: A Peer-to-Peer Electronic Cash System” (which we also explained), and before that, was prevalent in the heyday of Napster, Kazaa, LimeWire, and other file sharing services.
Here’s how it works. If I watch a movie through Netflix, my laptop connects to Netflix’s servers (which incidentally are running on AWS), and these servers send me a stream of video data for the movie I want to see. If an error with either Netflix or AWS takes the service down, as occasionally happens, there’s no way for me to stream the movie.
If instead I download the movie with software like BitTorrent—a peer-to-peer model—then my computer obtains the data directly from other people who already have the file I want and are willing to share it via torrent (called “seeding”).
In the process of downloading a movie I might be connected to 20 different “peer” computers simultaneously, each of which uses a small portion of its bandwidth to send me data. Over the course of the download some of these peers might go offline, while new peers sharing the same file come online. As long as I can connect to at least one other computer hosting the file, the download will continue, and I get to watch my movie.
The maxim of peer-to-peer systems is that the loss of any individual node should not disrupt the network as a whole.
Because the file sharing is done through a peer-to-peer network, it’s also very hard to deliberately take down (which is why illegal file sharing proliferates, despite decades of efforts from music and film industry lobbies to prevent it).
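The chunked, multi-peer download described above can be modeled in a few lines. This is a toy simulation, not the real BitTorrent protocol: the file is a set of numbered chunks, each peer seeds an overlapping subset, and one peer drops offline partway through, yet the download still completes because the remaining peers cover the whole file.

```python
FILE_CHUNKS = set(range(20))   # the complete file: chunks 0..19

# Each peer seeds an overlapping subset of the file (hypothetical peers).
peers = {
    "peer_a": set(range(5, 15)),
    "peer_b": set(range(0, 10)),
    "peer_c": set(range(10, 20)),
}
online = {"peer_a", "peer_b", "peer_c"}

downloaded = set()
rounds = 0
while downloaded < FILE_CHUNKS:
    rounds += 1
    needed = FILE_CHUNKS - downloaded
    for name in online:
        available = peers[name] & needed    # chunks this peer can offer
        if available:
            downloaded.add(min(available))  # fetch one chunk this round
        needed = FILE_CHUNKS - downloaded
    if rounds == 3:
        online.discard("peer_a")            # a peer drops off mid-download

print("complete:", downloaded == FILE_CHUNKS)
```

Losing peer_a slows the download but doesn't stop it, which is the point: no individual node is a single point of failure as long as every chunk is held by at least one reachable peer.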
The basic idea of the decentralized web is that some of the principles of peer-to-peer networking can be applied to websites and web applications too.
One example of this is Beaker browser (no connection to BREAKER). Described as “an experimental browser for building and exploring the peer-to-peer web,” the innovation of the browser is to let users create websites that are shared directly from the browser, without being hosted on an external server.
When I visit a Beaker-compatible website using the browser, not only do I receive a copy of the website’s data, I can also seed it for other users visiting the same site (my copy of the files automatically becomes available for others to download). This is a neat trick with regard to the problem of server capacity: under a peer-to-peer system, the more people who visit a website, the greater its capacity to accommodate new users, since each user becomes a potential server as well as a client.
And it’s not just static websites that can be hosted in a decentralized way. The Ethereum network is famous for its decentralized applications or “dapps”—web applications that use the distributed storage of the Ethereum blockchain to send and receive data.
So what are dapps?
To understand dapps, we need to return to the architecture of web services.
Think about Facebook: The front-end HTML/CSS/JS code tells your browser to display the blue activity bar at the top, the white scrolling newsfeed, a sidebar of apps and events, and so on—a basic template that is the same for each user. This template is then populated with content from your specific friends, groups, and page likes: information that is stored in a database (on Facebook’s servers) and read by back-end software (also on Facebook’s servers), which decides what to show you based on who you are, where you are, when you last logged in, and many other factors we don’t exactly know—together, the notorious newsfeed algorithm.
Decentralized applications have the same division between front-end design, back-end logic, and stored data, but swap centralized servers for the distributed nodes of a blockchain. Users connect to a dapp with a specialized browser—either a bespoke decentralized web browser like Blockstack, or a plugin like Metamask—and this browser interacts with the back-end logic: a program that runs on the distributed network, called a smart contract. In turn, the smart contract reads and writes data to the blockchain, which stores information in place of a conventional database.
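To make that division concrete, here is a deliberately simplified toy model in plain Python (not real Ethereum, and not Solidity): the "blockchain" is an append-only, hash-linked log of transactions, and the "smart contract" is just a rule for replaying that log into application state. Every name here is invented for illustration.

```python
import hashlib
import json

class ToyChain:
    """A toy append-only log where each block commits to the previous one."""
    def __init__(self):
        self.blocks = []

    def append(self, tx):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps({"tx": tx, "prev": prev}, sort_keys=True)
        self.blocks.append({
            "tx": tx,
            "prev": prev,
            "hash": hashlib.sha256(payload.encode()).hexdigest(),
        })

class GuestbookContract:
    """The 'back end' of a toy dapp: its state is derived from the chain."""
    def __init__(self, chain):
        self.chain = chain

    def sign(self, user, message):
        # A write is a transaction appended to the log, not a database update.
        self.chain.append({"user": user, "message": message})

    def entries(self):
        # Reads replay the log; any node holding a copy of the chain
        # reconstructs exactly the same state. There is no central database.
        return [(b["tx"]["user"], b["tx"]["message"]) for b in self.chain.blocks]

chain = ToyChain()
guestbook = GuestbookContract(chain)
guestbook.sign("alice", "hello, decentralized web")
guestbook.sign("bob", "no servers were harmed")
print(guestbook.entries())
```

The real system adds consensus, gas, accounts, and much more, but the structural idea is the same: shared state lives on a replicated log rather than behind one company's database.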
Decentralized applications do have disadvantages (which we won’t go into for now). But crucially, they place greater emphasis on users owning their own data and being able to port data and identity freely between different services. Which brings us neatly to the next question.
What happens to the big platforms?
Unfortunately, modifying the system architecture of modern web applications isn’t sufficient to achieve a democratic, decentralized web on its own. Besides reconfiguring the way we deliver data, web decentralization is also about resisting the power of a small number of players, such as the Big Five tech companies mentioned above. Doing this means setting new standards for the way we own and interact with our data, as well as for the way we store it.
Let’s compare two forms of text communication: email and a Facebook message.
According to its own statistics, Facebook had almost 1.5 billion daily users as of September 2018. That’s almost 20 percent of the world’s population, one in every five human beings, logging onto the social network each day. In fact, at the turn of the 20th century, 1.5 billion people would have been close to every living person on the planet.
Facebook has become an essential communication tool for friends, families, and businesses alike, weaving itself so deeply into the fabric of our digital lives that it can be hard to get by without it. But there have been many substantive criticisms of how the company operates, including its experiments with nudging users into negative emotions, role in the persecution of the Rohingya in Myanmar, use as a tool of Russian electoral influence in the US, and aggressive PR practices to deflect the above claims.
Many people—the author included—have considered quitting Facebook as a response, and struggled to do so. The problem is that for the moment there’s no real alternative, at least not one that allows us to communicate with the many people who still choose to use it. Although any user can download their Facebook data, it can’t be integrated with any other platform after the fact: The choice offered to us is essentially “use it or lose it.”
Thankfully, we don’t have to look to cutting-edge web technology to see what an alternative system would look like.
The humble email, both joy and curse of our online lives, is a prime example of a decentralized communication service. There are many different email providers, but the email format isn’t owned by anyone. It’s an open protocol for how messages are sent, and so can be used by any company that conforms to these rules. If my email account is at Gmail but you don’t like Google, you’re free to choose any other provider and we can still communicate. In fact, if you don’t like any of the commercial email providers, with a bit of technical know-how you can set up your own private email server and roll solo, Hillary Clinton-style. Switching from one provider to another is as simple as exporting your address book and message history then importing them into the new account. Collectively, these features have made email one of the most robust and enduring forms of digital communication, still an integral part of personal and professional life more than 40 years after it was invented.
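Email's portability comes from its open standards: RFC 5322 defines the message format, SMTP the delivery. Any provider, or your own private server, can produce and parse the same messages. A small sketch using Python's standard library shows the idea; the addresses and domains below are placeholders, not real accounts.

```python
from email.message import EmailMessage
from email import message_from_bytes

# Compose a message the way any mail client from any provider would.
msg = EmailMessage()
msg["From"] = "alice@gmail.example"        # an account at one provider...
msg["To"] = "bob@selfhosted.example"       # ...writing to a private server
msg["Subject"] = "Decentralization in practice"
msg.set_content("Different providers, one shared protocol.")

# The wire format is plain, standardized text, which is exactly what
# lets any conforming mail server interoperate with any other.
wire = bytes(msg)

# A completely independent implementation can parse it back.
parsed = message_from_bytes(wire)
print(parsed["From"], "->", parsed["To"])
print(parsed.get_payload().strip())
```

No single company controls that format, so switching providers never strands your correspondence, which is precisely the property the big social platforms lack.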
Unfortunately, there isn’t yet a decentralized Facebook, but there is a Twitter equivalent that is slowly growing in popularity. Mastodon is an open-source, decentralized social network based on a federated model. This means anyone can download the software and set up their own Mastodon server, enforcing whatever rules they like for the users who join it, e.g. “cursing is not allowed,” “no content will be censored,” or in one case, “you can’t use the letter e.” Each server is its own environment: Users within a server can talk to one another, and the server administrator can choose which other servers to federate with, i.e. allow mutual contact between users. This also means that entire servers can be blocked from communicating with each other (for example, users of the “no cursing” server might decide to be cut off from the totally uncensored one). The key point is, if a user doesn’t like the way a server is being run, they are free to simply leave and join a new one.
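The federation logic described above can be sketched as a toy model (this is not Mastodon's actual protocol, which is ActivityPub; the server names and rules are invented). Each server hosts its own users and keeps a list of servers it will exchange with, and a post is delivered only when both servers agree to talk.

```python
class Server:
    """A toy federated server: local users plus an allow-list of peers."""
    def __init__(self, name):
        self.name = name
        self.users = {}            # username -> inbox (list of posts)
        self.federated = set()     # names of servers we'll exchange with

    def add_user(self, username):
        self.users[username] = []

    def federate_with(self, other):
        self.federated.add(other.name)

    def deliver(self, sender_server, to_user, post):
        # Delivery requires mutual federation between the two servers.
        if (sender_server.name in self.federated
                and self.name in sender_server.federated):
            self.users[to_user].append(post)
            return True
        return False

polite = Server("no-cursing.example")
anything = Server("anything-goes.example")
polite.add_user("alice")
anything.add_user("bob")

# Both admins opt in, so posts flow between the servers...
polite.federate_with(anything)
anything.federate_with(polite)
first = anything.deliver(polite, "bob", "hi from the polite server")

# ...until one admin blocks the other server entirely.
polite.federated.discard(anything.name)
second = anything.deliver(polite, "bob", "are you still there?")

print(first, second)
print(anything.users["bob"])
```

Each admin controls only their own allow-list, so moderation decisions stay local: blocking a server cuts off that server, not the network.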
Mastodon has a small user base, currently somewhere in the region of 1.75 million according to network monitoring tools. But aside from being a project worth sharing, it doesn’t need to grow: There’s no VC money behind it, no investors expecting ROI, no board of directors demanding that growth targets be met. Because of this, there’s also no need to sell ads to bring in revenue: Server administrators just need enough money to cover hosting costs, which can be in the region of $15-20 per month for an instance with a few thousand users.
Projects mentioned in this article, like Mastodon, Beaker browser, Blockstack, Metamask, and the Ethereum network, are all helping to build the decentralized web today. Also worth mentioning are decentralized data storage projects like IPFS, Filecoin, and Storj, which provide critical infrastructure for decentralized apps, and of course bitcoin, the first project to conclusively prove that international financial transactions could be made without centralized intermediaries.
We’re always looking for more decentralized web services we should cover, so if you want to put a certain project on our radar, tweet @breakermag and let’s talk. We’re excited about the future of the web, too.