Original Link : https://medium.com/the-philosophers-stone/what-happens-when-meaning-goes-away-ece7156a8368

A.I., biotechnology, the loss of meaning, and the future of the world

Humanity has made tremendous strides in reining in plague, famine and war in recent decades.

Take just one example: poverty. Since 1990, the number of people living in poverty has fallen by an average of 130,000 per day. Two centuries ago, 90% of the population lived in extreme poverty. In 1950, 75% of the world was still living in extreme poverty. Today, those living in extreme poverty represent less than 10% of the world’s population.

As Yuval Harari writes,

For the first time in history, more people die today from eating too much than from eating too little; more people die from old age than from infectious diseases; and more people commit suicide than are killed by soldiers, terrorists and criminals combined. In the early twenty-first century, the average human is far more likely to die from binging at McDonald’s than from drought, Ebola or an al-Qaeda attack.

The message at the beginning of Yuval Harari’s book Homo Deus: A Brief History of Tomorrow is similar to that of Steven Pinker’s Enlightenment Now and Hans Rosling’s Factfulness — most observers are wrong about the state of the world, and we have, in fact, made truly unprecedented progress in recent decades.

But, while Pinker and Rosling focus on the people’s flawed perception of reality, Harari aims to answer a bigger and, indeed, more important question: “In a healthy, prosperous and harmonious world, what will demand our attention and ingenuity?”

In the past, prophets and thinkers believed that humanity’s scourges — such as disease, war and famine — were integral parts of God’s cosmic plan and that we should not go against God’s will by trying to address these challenges. Now, at the dawn of the third millennium, famines, plague and war are no longer incomprehensible and uncontrollable forces of nature: we realize that these problems are manageable and, with an effort, could be completely eradicated.

But what will replace our fight against the tide of entropy, against the “uncontrollable forces of nature,” at the top of the human agenda? Harari thinks that we are on the path towards attaining “bliss, immortality, and divinity.” But there will be a huge price to pay for our unconstrained and unlimited progress.


For decades, humanity’s primary goal has been to increase people’s happiness. So far, we have accomplished this by minimizing the amount of unhappiness. Yet, when all potential sources of unhappiness are gone, how will we enhance our happiness?

The US is one of the most developed countries in the world, and its GDP has risen from $2 trillion in 1950 to $20 trillion today. Real per capita income has doubled. But, as Harari notes, “studies have shown that American subjective well-being levels in the 1990s remained roughly the same as they were in the 1950s.” “It appears that our happiness bangs against some mysterious glass ceiling that does not allow it to grow,” he writes, “despite all our unprecedented accomplishments. Even if we provide free food for everybody, cure all diseases and ensure world peace, it won’t necessarily shatter that glass ceiling.”

As Arnold Schwarzenegger says, “Money doesn’t make you happy.” “I now have $50 million but I was just as happy when I had $48 million,” he has commented. After a certain point, no amount of money or other source of joy can make a person substantially happier.

Science contends that happiness is determined by biochemical processes. Hence, if we want to gain a more lasting feeling of contentment, we have to rig our system. In the words of Harari, “Forget economic growth, social reforms and political revolutions: in order to raise global happiness levels, we need to manipulate human biochemistry.”

Already, increasing numbers of people are taking various biochemical stimulants to pursue happiness (drugs) and minimize unhappiness (antidepressants, sleeping pills, Ritalin). And research laboratories are already working on other, more direct and efficient ways to trigger happiness, through the manipulation of human biochemistry (for instance, by sending electrical signals to appropriate spots in the brain). That might bring us closer to Aldous Huxley’s Brave New World.

The main objective of progress is to increase people’s utility. But, if we can attain the maximum possible utility through biochemical manipulations, why do we need progress at all? Will our blind pursuit of happiness end in the biological reengineering of humans, so that people faced with the loss of jobs and meaning due to AI can enjoy everlasting pleasure?


Harari believes that the emergence of AI marks the beginning of a completely new era in human evolution. First of all, because, in Harari’s view, “our feelings are not some uniquely human spiritual quality[;] they are biochemical mechanisms that all mammals and birds use in order to make decisions by quickly calculating probabilities of survival and reproduction.” Second, the convergence of the data revolution and advances in biotechnology will produce “external systems that can monitor and understand my feelings much better than I can.”

He thinks that we will eventually have algorithms that will know us better than we do. “Liberalism will collapse on the day the system knows me better than I know myself.” Such algorithms will not only empower authoritarian states by providing Orwellian tools of surveillance, but will also pose a challenge to the democratic system, because voters will be increasingly manipulated by political propaganda and advertising and more and more life decisions will be taken by algorithms. Ultimately, people will be forced to give up decision-making to algorithms.

This threat will be exacerbated by profound socioeconomic changes induced by AI. We are on the verge of a new technological revolution that threatens to displace us from most, if not all, jobs, creating a permanent “useless class.”

In the past, new technologies, although they removed many jobs, also created new ones through deskilling. As Kai-Fu Lee writes in AI Superpowers, during the Industrial Revolution, “factories took tasks that once required high-skilled workers (for example, handcrafting textiles) and broke the work down into far simpler tasks that could be done by low-skilled workers (operating a steam-driven power loom).” Hence, overall employment increased.

But AI will not result in deskilling: instead, its skill bias (there will be more demand for people with advanced skills, such as AI engineers and data scientists, but less demand for blue-collar workers) will make the majority of the workforce obsolete.

As AIs replace humans for mechanical tasks and gradually surpass us in cognitive abilities, we will have to deal with the demands of permanently unemployed people. In the nineteenth century, communism and socialism emerged due to capitalism’s inability to grapple with the disgruntled working class. The AI revolution will bring about even more sweeping changes — what new ideologies and movements will appear?

In the past, governments needed millions of healthy workers and soldiers in order to wage successful wars of conquest and collect more tax revenue to fill state coffers. As algorithms and robots replace humans, however, people will lose that value, and the elites will simply not need them.

While billions are faced with the loss of their jobs, their source not only of income but also of meaning, there will still be people who are,

indispensable and undecipherable, but they will constitute a small and privileged elite of upgraded humans. These superhumans [whom Harari dubs Homo deus] will enjoy unheard-of abilities and unprecedented creativity, which will allow them to go on making many of the most important decisions in the world. They will perform crucial services for the system, while the system could not understand or manage them. However, most humans will not be upgraded, and they will consequently become an inferior caste, dominated by computer algorithms and the new superhumans.

How will humanism and liberalism cope with the triple threat: the de facto shift of authority from humans to algorithms, the useless class and biological inequality? Harari argues that new religions and ideologies will fill the vacuum left by liberalism.


Harari outlines two main types of techno-religions, which might fill the gap left by liberal democracy: techno-humanism and dataism, or data religion. “Data religion argues that humans have completed their cosmic task, and they should now pass the torch on to entirely new kinds of entities.” Techno-humanism, on the other hand,

agrees that Homo sapiens as we know it has run its historical course and will no longer be relevant in the future, but concludes that we should therefore use technology in order to create Homo Deus — a much superior human model. Homo Deus will retain some essential human features, but will also enjoy upgraded physical and mental abilities that will enable it to hold its own even against the most sophisticated non-conscious algorithms.

In Brief Answers to the Big Questions, Stephen Hawking writes that, when judging human evolution, we should consider not only the evolution of DNA, but also the amount of externally transmitted information. But the speed at which information is being accumulated and the pace of evolutionary changes in DNA are starkly different: Hawking claims that “the rate at which useful information can be added is millions, if not billions, higher than with DNA.”

There are problems with that discrepancy, but an even greater danger is that humans still have the instincts and aggressive impulses that characterized cavemen — and, in the information era, the manifestation of these instincts could prove disastrous. (They could lead, for instance, to global nuclear war.) We can’t, therefore, wait for Darwinian evolution to eliminate our atavistic impulses — instead, we should use genetic engineering to speed up the evolution of DNA, thereby bridging the gap between the development of our bodies and our intellectual advancement.

Techno-humanism can help us address many of today’s challenges, by making people less reliant on irrational instincts and beliefs and logical fallacies and more similar to AI — objective, impartial, evidence-based and data-driven.

Dataism, another possible new religion advocated by high-tech gurus and Silicon Valley prophets, contends that “the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data-processing.” Already, humans are like tiny chips in the global flow of data. Every day, billions tweet, post, mail, record and share — dataism regards the flow of information as a supreme value and its main objective is to create an all-encompassing data-processing system.

This idea might seem eccentric, but it is worth noting that, even today, we are increasingly adopting a data science approach to the real world.


According to the dataist view, today’s democratic sociopolitical system is becoming increasingly irrelevant. In the past, liberal democracies have been more successful than centralized dictatorships because of the former’s inherent efficiency when it comes to data processing.

In liberal democracies, there are many relatively small processors (individual consumers, businesses, governments, etc.), whereas, in centralized systems, there is only one: the authoritarian leader and his close circle.

Distributed data processing systems (like liberal democracy) have been more productive than alternative models because, if one processor fails, there is always another player to fill the gap. On the other hand, a mistake in the centralized system can have disastrous consequences. Harari gives an example of this phenomenon:

the Soviet science ministry forced all Soviet biotech laboratories to adopt the theories of Trofim Lysenko — the infamous head of the Lenin Academy for Agricultural Sciences. Lysenko rejected the dominant genetic theories of his day. He insisted that if an organism acquired some new trait during its lifetime, this quality could pass directly to its descendants. This idea flew in the face of Darwinian orthodoxy, but it dovetailed nicely with communist educational principles. It implied that if you could train wheat plants to withstand cold weather, their progeny will also be cold-resistant. Lysenko accordingly sent billions of counter-revolutionary wheat plants to be re-educated in Siberia — and the Soviet Union was soon forced to import more and more flour from the United States.

In an open capitalist system, a company that adopted Lysenko’s pseudoscientific approach would fail, but its competitors would be quick to return the system to equilibrium.

That does not mean that democracy and capitalism are invulnerable. In the past, the most powerful states — such as the Roman Empire and the Chinese dynasties — had centralized data processing, and existed for thousands of years.

Consequently, if data processing conditions change again in the twenty-first century, distributed systems (like liberal democracy) might decline. Harari writes, “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete — not because they are unethical, but because they don’t process data efficiently enough.” Voters and politicians are ignorant about AI and other new technologies and are unable to comprehend the scale of the disruption they will cause.

This tendency is being amplified by a more extensive use of AI in politics and economics. Data is AI’s oil: the more data is supplied, the better AI algorithms are. In centralized systems, all data (for instance, about the state of the economy) will be concentrated in one place: therefore, centralized states will be able to attain more efficiency when it comes to data processing than dispersed systems of decision-making. Autocracies might therefore gain the upper hand over democracies. Jack Ma has already suggested that AI could be used for better centralized economic planning. How will the west cope with an AI-powered, centralized China?

Ugly facts have already discredited today’s system. Yet, unless we have a new story, we will not develop new societal structures. Is dataism (or techno-humanism) the future?


Yuval Harari warns that “all the scenarios outlined in this book should be understood as possibilities rather than prophecies. When we think about the future, our horizons are usually constrained by present-day ideologies and societal systems.”

Harari’s book skillfully synthesizes material from a variety of fields, such as philosophy, history, politics, economics and biology — as a result, the argument looks flawless and seamless, though, in reality, many of his claims could be challenged, especially his claims about free will.

I believe that we should not try to predict the future. Historical prophecies are beyond the scope of the scientific method. The future depends on us, and we should not constrain our actions or thinking because of any conception of dogmatic historical necessity. The social disciplines, the humanities and the natural sciences have fundamentally different underpinnings and therefore should not be treated in the same way.

Applying the scientific method to politics, history and economics is wrong — we cannot make reliable predictions about human development because the course of events is affected by our perception of reality.

Karl Marx’s Das Kapital provided seemingly compelling evidence that communist revolution was inevitable.

But Marx missed one important fact: capitalists can read. Marx’s book was not limited to his close circle: his ideas spread around the world. When capitalist states understood the dangers of an enraged proletariat, they took measures to improve the living conditions of the working class. As a result, the communist revolution did not take place in developed countries.

That is why we should not try to forecast the future — even if our predictions are data-driven and evidence-based, reality itself will change in response to our forecasts, and we are still unable to estimate the impact of ideas and predictions on the actual state of affairs.

Although dataism and techno-humanism might be the future, I believe that open societies will be able to peacefully transition to the new order. The ability of open societies to adapt to changing conditions and their willingness to adopt innovations and make changes to the system itself make them resilient. Continuous adaptation to changing technological, socio-political and economic conditions — the ability to accept criticism and take action in order to avoid upheavals that could jeopardize the system — is the backbone of open societies.

The main problem with dogmatic theories is that they posit universally applicable and immutable laws. But human understanding of reality is inherently imperfect: our analyses of the world will always be influenced by our prejudices and biases. Hence, our interpretation of reality rarely corresponds to the actual state of affairs — and thus it is wrong to hold any theory or view to be the only possible explanation.

Open societies’ primary advantage is that they allow the free flow of information and opinion, which makes the imposition of a single dogma upon society impossible and facilitates the pursuit of truth.

Therefore, in this AI era, since we can never predict what precisely will happen, and because we are on the brink of tremendous changes, the best strategy is to strengthen the underlying idea of open societies and focus on short-term problems, rather than long-term prophecies.

We should not waste our time and effort on useless debates about the long term — things never turn out the way we think they will. Let us focus on the present. And let us work on the pursuit of truth — since truth is the only robust entity in our increasingly shaky social environment.