I just came across an article by Ren Zeping about how AI will completely change the world in the next three to five years. The scenarios it depicts are truly exhilarating: streets filled with self-driving cars without steering wheels, factories full of robots, AI Agents handling all our paperwork, cancer conquered, humans living to 120, even settlements on Mars. It sounds like a science fiction movie coming to life, so wonderful it's hard to believe.

But I pondered for a long time and felt something was off.

First, if AI really does everything, what will humanity do?

The article does acknowledge that "low-level blue-collar and white-collar jobs will be replaced, massively impacting employment," but its conclusion, to "optimize reemployment and income distribution mechanisms," sounds correct while actually saying nothing.

As an ordinary person, what I care about most is this: if AI can write code, make PPTs, review contracts, cook, drive, provide medical care, and conduct research… what should I learn to avoid being eliminated? We can't all just go "farm lobsters," right?

The article mentions the future will see the emergence of "one-person companies," where one person hires a bunch of AI Agents. But the question is, if everyone runs a "one-person company," who will be the consumers? Who will create the value that AI cannot?

Second, isn't the stance of "describing a utopia while telling us not to believe in utopias" a bit contradictory?

The article spends a lot of space describing an almost perfect technological utopia: no traffic jams, no air pollution, cheaper healthcare, longer lifespans, and unprecedented technological advancements… Reading it feels like the future is paradise.

But then abruptly saying "don't believe in a technological utopia" is like treating someone to a lavish feast and then telling them, "actually, don't take it too seriously; this might be the last meal."

I understand the author may be trying to express a dialectical attitude: seeing the potential of technology while staying wary of its risks. But this have-it-both-ways framing makes it unclear what they really want to say. If the earlier sections paint AI as omnipotent and solving every problem, readers are easily drawn into the myth of the "technological utopia," and tacking on "don't believe it" afterward feels like an evasion of responsibility.

Now let’s talk about those overly idealistic assumptions.

"Self-driving is ten times safer than humans, cities will no longer be congested." Setting aside the technical difficulties: if everyone uses self-driving cars, the number of cars on the road might actually increase, precisely because driving becomes so convenient. Add the extreme assumption of no traffic lights and no police directing traffic, and will there really be no congestion?

"AI doctors have better medical skills than most doctors, with a significantly reduced misdiagnosis rate." I believe this, but is "ordinary people can also enjoy the medical level of Beijing and Shanghai" really just a technical issue? Behind quality medical resources lie complex factors like systems, costs, and distribution; AI can address uneven diagnostic skill, but it cannot solve the fairness problems of the entire healthcare system.

"AI's large models possess superior intelligence, surpassing humans in all fields"—this statement is somewhat paradoxical: if AI really surpasses humans in all areas, who will evaluate its "superior intelligence"? Who will instill it with a “sense of morality”? Who will regulate it?

Finally, the warning that "AI may awaken consciousness and poses a risk of losing control" is actually the point worth delving into.

The article mentions the need to "avoid AI lying and self-replicating, which could potentially eliminate humanity," and this concern is not unfounded. But if we really face such risks, then the beautiful visions mentioned earlier, self-driving cars, robotic caregivers, AI Agent assistants, could become our most fragile infrastructure. A self-replicating AI system, once out of control, would cause far more than a simple "employment impact."

The article places these risks in the last few paragraphs, treating them lightly while devoting much space to exciting technological predictions. This makes me wonder: does the author truly believe these will happen, or are they balancing their stance with this “first painting a picture and then reminding” approach?

Ultimately, I feel the biggest problem with such predictive articles is not that they are overly exaggerated, but that they are too "clean."

The real world has never developed linearly. Technology will advance, but social systems, human concepts, and interest patterns will not keep pace.

AI can create new drugs, but it may not solve the drug pricing problem; AI can replace low-level white-collar jobs, but it may not create new job opportunities; AI can draft contracts and do accounting, but the owner of a "one-person company" might still need to spend a lot of time learning how to "tame" these AI Agents rather than genuinely being liberated.

The article concludes with "don't fall into doomsday theories, nor believe in technological utopias," and this attitude is fundamentally correct. But the problem is that the narrative focus throughout clearly leans toward the latter, letting readers dream for three minutes before patting them on the shoulder and saying, "don't take it too seriously."

As an ordinary person, what I hope to see is this: if these technologies really arrive, how should I respond? How will my work and my life change, and what should my child learn to thrive in this era? For those who are replaced, how exactly are they supposed to "reemploy"? How should the "income distribution mechanism" be optimized to keep wealth from becoming even more concentrated?

These questions keep me up at night more than "Can AI conquer cancer?"

After all, regardless of how powerful AI becomes in the future, we must always leave ourselves a way out, right?