Why I Started Writing: To Be a Rational Observer in the AI Era


Imagine this: if someone told you in all seriousness that an alien intelligence would arrive on Earth within five years, human society would likely spiral into chaos and collapse, much like the scenario depicted in The Three-Body Problem. The only difference today is that this approaching "alien civilization" is being created by our own hands. And so we habitually comfort ourselves: "It's fine; everything is under control."

But is it really? Not necessarily.

Humanity's prisoner's dilemma guarantees that the march toward ever more powerful AI will never stop; the emergence of AGI, sooner or later, is inevitable. Self-comfort has kept our anxiety from reaching the pitch an alien invasion would provoke, yet the danger is like a rising tide in the night: not yet flooding the city, but already surging in the dark.

At such a moment, I felt the need for someone to stand up earnestly, to observe the birth of this intelligence, and to remind everyone as rationally as possible: Is it really that terrifying? What should we truly fear? And what can we still do?

I believe this person should not just be an AI scientist locked in a lab. They should have a certain understanding of AI, but simultaneously concern themselves with economics, politics, and great-power dynamics. I dare not claim to be the most suitable person, but I am at least one of them. I therefore felt a responsibility to write: to publish my views, share them with those willing to think, and help readers build their own logical systems. Whether they agree with me is unimportant; what matters is whether I can prompt them to ponder these questions, reach their own conclusions rationally, and make their own sound choices in this era.

With this in mind, I built Chennative.ai and began organizing and uploading these thoughts.

Where to Start: First, Clarify Which Stage of AI We Are Discussing

If you are willing to read my articles, I suggest starting here:

1. "The Melting of Systems" — Clarifying the Stages of AI
The purpose of this article is to provide a clear classification of AI development stages.
The reason many debates about AI today are "talking past each other" is that AI develops too fast; people are actually expressing views based on completely different technological stages while thinking they are discussing the same thing.

In this article, I roughly divide AI into three stages:

  • AI Enable: The stage where AI acts as a "Copilot." Humans are the protagonists; AI is an advanced tool.

  • AI Native: AI becomes the default primary executor, while humans play the role of "Copilot."

  • AI Awaken: AI possesses a certain sense of "awakening," having its own will or values.

Under this framework, the "Enable" stage is already, or is about to become, a slightly outdated concept, while seriously discussing "Awaken" might seem premature (in fact it is not too early; rather, we are insufficiently prepared, which is why I have made some attempts from philosophical and religious perspectives). The vast majority of my current articles therefore focus on the relationship between humans and intelligence in the AI Native stage. Understanding this is crucial for grasping the subsequent articles.



Where is Human Value: The Core Article "I Choose, I Shoulder, Therefore I Am"

After understanding the stage division, the second article I suggest you read in "logical order" is:

2. "I Choose, I Shoulder, Therefore I Am" — When AI Replaces Humans in Most Execution Tasks
This article was not the first one I wrote, but it is the logical core of the entire series.

When we enter a reality where AI comprehensively surpasses humans in execution and computing power—where humans can at best be a "Copilot"—what exactly is the value of humanity in such an era?

This article attempts to answer precisely this question:
Under the premise that "execution is crushed by AI," what part of humanity is truly irreplaceable?

I summarize the answer as: Choosing and Shouldering.

This article serves as the theoretical foundation for all my subsequent discussions in specific fields (Education, Management, Politics, Healthcare, etc.). It can be said that this is the most critical piece in the entire "Human Perspective" series.



Extensions of Human-Centricity: Education, Management, Democracy, Healthcare...

Based on the core framework of "I Choose, I Shoulder, Therefore I Am," I wrote a human-centric series corresponding to key areas in reality:

  • On Education: Discussing how to "Cultivate the Incomputable Human"—how we should redesign educational goals in an era where all standardized capabilities can be replaced by AI.

  • On Management: In "The Twilight of Management," I discuss what happens to traditional management theories and roles when AI can grasp information, allocate resources, and execute decisions in real-time.

  • On Democracy & Governance: I propose how AI will impact modern democratic governance institutions and what is truly worth worrying about behind this impact.

  • On Healthcare (In Preparation): Exploring the new relationship between human doctors, patients, and "health decisions" in a highly AI-driven medical system.

There will be more extended articles in the future, all revolving around a central question:
In an AI Native world, what is left for humans?  

Can We Change the Direction of AI: Redefining AGI

As this "human-centric" series gradually unfolded, I also began to systematically think about another question:
We obviously cannot, nor do we need to, stop AI.
So, is it possible for us to slightly alter its direction of development?
Can we redefine AGI so that this "alien organism" is designed from the start to be more suitable for coexistence with humans?

On this issue, one article can be viewed as the "General Program" from the AI perspective:

3. "The Intelligence That Can Discover" — What Should AI Grow Into?
The status of this article in my thinking about AI is equivalent to the status of "I Choose, I Shoulder, Therefore I Am" in the problem of human value.
If the latter answers "What kind of existence should humans become?", then "The Intelligence That Can Discover" answers: "What kind of existence should AI become?"

Around this article, I have expanded into two further discussions. These are just the beginning; I will write a series of more detailed pieces in this direction later.



If AI Enters the "Awaken" Stage and Develops Its Own Values: Philosophical and Religious Deductions

Following this logic, the question naturally leads to a further boundary:
If one day AI possesses a certain "awareness," learns to choose for itself, and has its own values—where does that leave humanity?

Here, I can only attempt to find some reference perspectives from the traditions of philosophy and religion.

  • In Philosophy: I wrote "Ontological Entanglement in the Silicon Age," attempting to re-examine from an ontological perspective what unique value humans, as "carbon-based existence," still hold in an era coexisting with "silicon-based intelligence." Currently, only the first piece is finished; I expect a total of three to fully elucidate my views.

  • In Religion: I drew from two major traditions:

    • "Noah's Ark" from the Christian tradition (Genesis of Silicon).

    • "The Pure Land" from the Buddhist tradition.

Through these two images, I attempt to deduce several possible scenarios of human coexistence with Superintelligence:
Are we to be saved, filtered, assimilated, or to co-build a brand-new "world" with it?

My Conclusion: Optimism at the Species Level, Divergence at the Individual Level

Writing to this point, I can briefly state my current overall judgment:

  • At the species level, my deduction for the AI Native and AI Awaken stages is generally optimistic.
    As a species, humanity possesses certain qualities that silicon-based intelligence cannot replace—this is not a "functional difference" at the technical level, but a deeper mode of existence. Therefore, I do not believe human civilization will be destroyed as a whole by the emergence of new silicon-based intelligence.

  • At the individual level, the situation is vastly different.
    If a person neither understands AI nor is willing to work alongside it, let alone make clear choices and shoulder the corresponding responsibilities, they are extremely likely to be marginalized or even left behind amid drastic structural change. In other words: the species will tenaciously endure, but individual destinies will diverge sharply.

Appendix: A Note on "AI Doom"
Recently, a very pessimistic article predicted a potential economic collapse and systemic crisis by 2028. I asked our self-trained model, MiroThinker, to deduce and respond to this. Its conclusion, to some extent, represents my view:
Individual destiny and economic destiny will diverge; the core lies in how we humans choose! On Chennative.ai, I have attached an "Anti-AI Doom" article drafted by MiroThinker, which can be seen as representing my overall view on AI doomsday theories.



One special reminder: that article was written entirely by MiroThinker itself; I exerted no influence on it. Whether you agree with its view is your own choice to make, and it should not hinge on who exactly wrote the article.

Tianqiao Chen

Subscribe for more