The AI era is creating a divide: the rich get richer, and the poor get poorer.
AI has already changed our habits; that much is fact.
We use AI to write emails, build slide decks, search for information, and even draft our social media posts. We have grown as accustomed to AI's presence as we once did to WiFi.
But very few people stop to think about one question: Is the AI you use the same as the AI others use?
The 'fairness' of the AI era is the greatest illusion.
Silicon Valley likes to tell a story: AI has given everyone a super assistant, and knowledge is no longer the privilege of a few, but equal for all.
It sounds beautiful. But the truth is—AI is fundamentally unfair; it is a competition of financial power.
From chips to computing power, from model training to token consumption, every link in AI is burning money.
An NVIDIA H100 chip costs over $25,000. Training a GPT-4 level model costs over 100 million dollars. Every time you ask AI a question, tokens are burning in the background—tokens that have a price.
Claude Opus costs $5 per million input tokens and $25 per million output tokens. ChatGPT Pro costs $200/month. Add Perplexity, Cursor, Midjourney… and a heavy AI user easily spends over $500 a month on tools.
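The $500 figure is easy to sanity-check. Here is a minimal sketch of a hypothetical heavy user's monthly stack; the line items are illustrative assumptions built from the prices quoted above, not a real invoice, and actual pricing varies by plan and date.

```python
# Hypothetical monthly AI tool stack (illustrative figures, in USD).
# The Claude API estimate assumes ~20M input + ~2M output tokens
# at the article's $5/$25 per-million rates.
tools = {
    "ChatGPT Pro": 200,
    "Claude API (est. usage)": 150,  # 20M * $5/M + 2M * $25/M
    "Perplexity Pro": 20,
    "Cursor Pro": 20,
    "Midjourney": 30,
    "Misc. tools": 100,
}

monthly_total = sum(tools.values())
print(f"Estimated monthly spend: ${monthly_total}")  # $520
```

Even with conservative line items, the stack clears $500/month before any experimentation or one-off API spikes.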
Some people spend $5,000 a month using AI to build competitive barriers, while others using the free version of ChatGPT believe they are keeping up with the times.
This is not the same track. It's not even the same game.
At the national level: the structural gap can no longer be reversed.
This logic becomes even harsher at the national level.
The AI arms race requires three things: chips, computing power, and talent. All three require huge capital.
The United States controls over 70% of the world's AI computing power. China is catching up, but chip bans have choked its progress. As for most developing countries: across 46 emerging-market nations, entry-level broadband costs roughly 40% of average monthly income.
When a young person in Nigeria considers stable internet access a luxury, how can we talk about 'AI equality'?
In high-income countries, 94% of people can access the internet, while in low-income countries, only 23% can. In high-income countries, 84% have 5G coverage, while in low-income countries, only 4% have it.
In the AI era, third-world countries are not merely a step behind the starting line; they are effectively shut out of the race.
This structural gap cannot be bridged by effort alone.
At the individual level: your ceiling is being redefined by AI.
The logic at the national level applies equally to individuals.
A sentence I wrote in my Twitter bio: an individual's ceiling = worldview + cognition + practical ability.
What has AI done to these three things?
▶️ First, AI has solved a large number of practical efficiency problems.
In the past, an industry report took a week to produce; now it can be done in a day. Coding used to mean starting from zero; now AI scaffolds the framework for you. In terms of raw efficiency, AI is indeed leveling the playing field.
▶️ But second, AI is greatly amplifying the cognitive gap.
With the same AI tool, what you ask, how you ask, and whether you can judge whether the answers AI gives are right or wrong—all of this completely depends on your original cognitive level.
A person with deep cognitive understanding uses Claude for research; they know what questions to ask, how to probe further, and which answers have gaps that need verification. AI saves them 80% of execution time, which they then use for deeper thinking.
And what about someone with shallow cognition? They throw questions at AI and accept whatever it returns, outsourcing their thinking and delivering AI's output as-is. Over time, they stop thinking. AI hasn't made them smarter; it has made them lazier and dumber.
▶️ Third, the gap in delivery quality will continue to grow.
When you ask AI questions from your existing level of understanding, the differences in the depth, accuracy, and timeliness of what it delivers are exponential. With the same Claude Opus, one person produces deep insight while another produces plausible-sounding nonsense.
A particularly interesting study from Finland's Aalto University found that the more one uses AI, the more likely they are to overestimate their abilities. AI makes you 'feel' stronger—outputs appear professional and smooth. But if you cannot discern good from bad, you are simply producing 'refined mediocrity.'
Thus, the gaps in worldview, cognition, and practical ability—these three dimensions are infinitely amplified in the AI era.
The smart become smarter, those with understanding deepen their understanding, and those with money use better tools to create greater distances. On the other end, people become lazier, shallower, and poorer with the 'help' of AI.
Cost × Cognition: The double gap is compounding.
Here is a logical chain that many people haven't figured out:
Money determines what level of AI you can use → the level of AI determines the quality and depth of information you can obtain → the quality of information determines your cognitive boundaries → cognitive boundaries determine the quality of your decisions → decision quality determines how much money you can make.
This is a closed loop. The rich will get richer, and the poor will get poorer.
The hallucination rate of the free version of ChatGPT is nearly 40%. That means if you ask it 10 questions, roughly 4 of the answers contain fabrications. The paid GPT-4 has a hallucination rate of 28%, while some of the newest reasoning models have actually risen to around 45%.
The decisions you make with the free version and those made with Opus will lead to two completely different life trajectories over time.
There will always be a huge information gap in this world. AI has not eliminated this gap; it has turned it into a paywall.
Those who can bypass restrictions and those who cannot have already become two different worlds.
Let me share a personal observation that I find quite poignant.
The reason you can see this article is likely because you can bypass restrictions and browse on Twitter.
But think back: how many people around you cannot bypass those restrictions? When you talk with them, can't you clearly feel that you are no longer operating at the same level of understanding?
This is not an IQ gap. This is a long-term cognitive differentiation caused by the information environment.
One person is exposed to the world's cutting-edge information, in-depth discussions, and high-quality content creators every day. Another person sees algorithm-fed short videos and filtered information streams daily.
After five or ten years, the way these two individuals think, their judgment skills, and their worldview have become completely different.
The AI era has magnified this gap even further. Those who can bypass restrictions use Claude, use Perplexity, use the best AI tools in the world. Those who cannot bypass restrictions—ChatGPT is blocked in China, Claude is blocked in China—can only use localized alternatives or acquire them at a markup through intermediaries.
In the AI era, the 'walls' are not just physical firewalls. There are language barriers—the optimization of cutting-edge AI models for English far exceeds that for other languages. There are paywalls. There are algorithmic echo chambers. Each wall divides people into different worlds.
Research from Stanford University shows that non-English users consume five times the token amount for the same content when using AI. In other words, you spend the same amount of money but receive less information and of lower quality.
The scariest thing: you have already fallen behind, but you don’t know it.
This is the most important point I want to make in this entire article.
The free version of AI can also answer questions. It can help you write things. It can help you search. Therefore, those using the free version feel—'I’m using AI too; I’m not falling behind.'
But the free version's reasoning is shallower, has more hallucinations, and contains older information. The answers you get 'seem' correct, but are actually filled with seemingly plausible errors.
It's like two people are both 'running.' One is genuinely running forward, while the other is running in place on a treadmill. Both feel like they are running, but only one is making progress.
In psychology, there’s a concept called the Dunning-Kruger effect: the less you know, the more you think you know. AI has amplified this effect tenfold— the more you rely on AI, the stronger you feel. But you are no longer capable of independent thought; you just don’t realize it.
This is the cruelest aspect of the AI era.
It's not that AI will replace you. It's that those who use better AI and have deeper understanding will leave you far behind. And you might not even realize how you fell behind until the day you are eliminated.
