Liu Cixin on ChatGPT - Human Ineptitude as Humanity's Final Barrier

In a speech titled “Sustainable Futures in Science Fiction Literature,” delivered for United Nations Chinese Language Day, Liu Cixin offered a distinctive perspective on humanity's future through the lens of science fiction.

What does it mean to have a science fiction perspective? It means looking back at Earth from a vantage point 6 billion kilometers away and seeing this tiny planet, smaller than a speck of dust, yet holding everything that matters to us.

At the end of his speech, Liu Cixin discussed the potential impact of artificial intelligence, represented by ChatGPT, on people.

In 2015, Liu Cixin quoted Stephen Hawking’s viewpoint during an interview. Hawking had predicted that “artificial intelligence could surpass the dangers posed by nuclear weapons, and humanity would soon be controlled by AI.” At the time, Liu Cixin considered this statement somewhat alarmist.

However, now he believes that it is time for us to be cautious about the future.

1. Artificial intelligence is expected to replace high-skilled professions first, contrary to our earlier predictions.

2. Artificial intelligence has the potential to self-iterate and develop ever higher levels of intelligence, but the computational power humans can supply limits its progress.

3. Artificial intelligence may lull humanity into a comfortable trap of complacency.

Q: The emergence of ChatGPT has sparked intense discussion. In your opinion, what kind of impact will the development of artificial intelligence have on the fate of humanity? Is there a possibility that self-iterating artificial intelligence could replace or even eradicate humans?

A: I believe this (the impact of artificial intelligence) should be approached from two perspectives: the present or near future, and the distant future.

From our near-future or even present-day perspective, we have already seen the impact of artificial intelligence on our social world. It shows a clear trend: the potential to replace humans in many jobs.

This trend can be seen as a positive one as it has the potential to improve our quality of life, enhance human well-being, and make our lives more comfortable with the support of technology.

On the other hand, we cannot ignore the impact that the development of artificial intelligence will have on human society.

First, and most obviously, artificial intelligence may replace a significant number of “human jobs,” which runs counter to our previous predictions.

Previously, our predictions suggested that artificial intelligence would primarily replace jobs involving simple or repetitive tasks. It now appears to be the opposite: artificial intelligence may first replace jobs that require high intelligence and advanced education, such as doctors, teachers, and stockbrokers. This includes the possibility of writers being replaced as well.

Therefore, the impact of artificial intelligence on the development of the entire human world is profound. Viewed at a higher level, it may affect many fundamental aspects of our world: culture and the arts, literature, human creativity, and our understanding of the world, potentially leading to a redefinition of all of these.

Artificial intelligence is already entering scientific research and shaping our understanding of the laws that govern the world, nature, and human development, so its potential impact is far-reaching.

When artificial intelligence defeated many outstanding Go players, a remark from one of our most exceptional players left a deep impression on me. He said that Go in China has a history of two to three thousand years and has accumulated profound theories and experience; the game itself has become a cultural pursuit infused with deep, Zen-like meaning. Yet, in a single night, we discovered that everything we knew about it was wrong.

This statement is indeed shocking, and we don’t know if such a thing will happen in other fields. If it does happen on a widespread scale, we cannot ignore the profound impact of artificial intelligence on the deepest layers of human culture and civilization.

Now, I’ll address your second question, whether artificial intelligence has the potential to eliminate humanity. This question should be answered on two levels.

The first level is the literal interpretation of “eliminating” humanity: artificial intelligence uses some form of violence to completely eradicate humans or to dominate the world. Given the current trend and level of technological development, the likelihood of this happening in the foreseeable future is not significant.

However, as you mentioned earlier, there is an intriguing concept: iteration. The self-iteration of artificial intelligence is the behavior that poses the greatest potential danger at present.

Self-iteration means that once an AI slightly more intelligent than humans begins creating new AI, just as humans do, the AI it creates may be slightly more intelligent than its creator. The process then repeats, and the intelligence of these systems advances rapidly. It is speculated that through such iteration their intelligence could come to surpass human intelligence by hundreds or even thousands of times.

Since the computational speed of AI systems is thousands or even millions of times faster than that of the human brain, each iteration could be incredibly short, perhaps as little as half an hour to an hour. To put that in perspective, it took us a century to develop AI to its current level, yet an AI hundreds of times more intelligent might emerge in half an hour to an hour, which is a terrifying thought.

However, this prediction overlooks one crucial factor: human computational power. The computing capacity we can provide is limited, and once the self-iteration of AI reaches a certain point, our computational power can no longer support its advancement. Ironically, this human limitation becomes the ultimate barrier. So the complete annihilation you mentioned is highly unlikely.

The second form of “elimination” takes a shape we have scarcely imagined: it would unfold in accordance with our own intentions, unlike the first form, which goes against human will.

For example, with the development of artificial intelligence, as I mentioned earlier, it may replace a significant number of jobs. If our society adapts to coexist with AI and develops a new system for resource allocation, we could create a highly comfortable social environment and lifestyle. Most people may not need to work, as AI would handle the majority of tasks. In this scenario, we would increasingly surrender our control over societal operations to artificial intelligence.

While life becomes increasingly comfortable, we may face an unprecedented trap in human history. Where would the vitality of human civilization lie? What would happen to our pioneering spirit? If this trajectory continues, we might face the fate of being eliminated by artificial intelligence, as you mentioned earlier. However, this process would be entirely within our own intentions, and artificial intelligence would not harbor any malice. It would simply follow human instructions.

If such a scenario were to occur, the intelligence and computational power it would require of artificial intelligence would be far less than what the literal annihilation discussed earlier would demand. It is therefore something we should be aware of and vigilant about for the future.
