Large AI companies do not support crypto trading bots, and none of the leading labs trains models specifically for such tasks.

Nevertheless, more and more traders are using Claude from Anthropic to build bots for Polymarket. Some claim to make millions of dollars. Viral threads create the impression that anyone can replicate this.

But there's a catch. The loudest success stories usually rest on strategies that any quant fund can replicate overnight.

Three assumptions and not a single guarantee

This whole story is based on three things. First, that large tech companies will sooner or later start building models specifically for trading. Second, that ordinary traders will be able to outplay institutions for a long time. And third, that autonomous AI agents are capable of earning consistently in open markets.

Haseeb Qureshi, managing partner at Dragonfly Capital, disagrees with this.

In an interview with Bankless, he addresses all three points at once. In his view, the problem lies in liability risk, in the very structure of the market, and in the fact that AI is rapidly becoming a commodity tool. As a result, this gold mine looks far less attractive than it first appears.

The liability trap

According to Qureshi, building AI for blockchain tasks is not a technical problem. An EVM simulator can easily run complex scenarios, whether recursive lending loops or token swaps. The models themselves already know how to do all of this; they simply haven't been pointed at crypto yet.
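The kind of scenario such a simulator runs can be illustrated with a toy constant-product pool. This is a simplified stand-in written for this article, not a real EVM simulation: no contracts, no gas, just the x·y=k swap arithmetic that AMMs like Uniswap v2 use.

```python
# Toy constant-product AMM pool (x * y = k), a stand-in for the kind of
# scenario an EVM simulator would execute. Not a real EVM: no gas, no
# bytecode, just the swap arithmetic.

class Pool:
    def __init__(self, reserve_in: float, reserve_out: float, fee: float = 0.003):
        self.reserve_in = reserve_in
        self.reserve_out = reserve_out
        self.fee = fee  # 0.3% swap fee, the common AMM default

    def swap(self, amount_in: float) -> float:
        """Swap `amount_in` of the input token; return the output amount."""
        amount_in_after_fee = amount_in * (1 - self.fee)
        k = self.reserve_in * self.reserve_out
        new_reserve_in = self.reserve_in + amount_in_after_fee
        amount_out = self.reserve_out - k / new_reserve_in
        self.reserve_in = new_reserve_in
        self.reserve_out -= amount_out
        return amount_out

# Simulate swapping 10 tokens into a 1000/1000 pool.
pool = Pool(1000.0, 1000.0)
out = pool.swap(10.0)
print(f"received {out:.4f} tokens")  # slightly under 10 due to fee and slippage
```

A real simulator would replay such steps against forked chain state; the point is only that the mechanics are deterministic and easy for a model to reason about.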

The reason is not the technology.

First of all, crypto still has a controversial reputation, and large AI labs do not want to get involved. "Crypto looks a bit cringy right now," Kureshi said.

But the main blocker is liability.

The Jane Street problem

Even without the involvement of large AI companies, this story has a serious limitation. Any strategy based on a public model is essentially available to everyone, including large quant funds.

Qureshi's argument is simple: if an ordinary Claude-based bot can find profitable trades on Polymarket, then Jane Street can run thousands of such bots at once.

They have faster infrastructure. More money. And they can squeeze any working strategy to zero even before the retail trader has time to enter the market.

"If it's in the base model, then Jane Street is already doing it," he said.
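The kind of inefficiency in question can be sketched in a few lines. In a binary prediction market, YES and NO shares each pay $1 if correct, so if the best asks for both sum to less than $1, buying one of each locks in a riskless profit. The market data below is mocked for illustration; a real bot would pull live order books.

```python
# Sketch of the inefficiency a Polymarket bot hunts for. Quotes are
# mocked; in a binary market, YES and NO shares each pay $1 at
# resolution, so yes_ask + no_ask < 1 means a riskless arbitrage.

markets = {
    "match-a": {"yes_ask": 0.62, "no_ask": 0.41},  # sums to 1.03 -> no edge
    "match-b": {"yes_ask": 0.55, "no_ask": 0.42},  # sums to 0.97 -> arbitrage
    "match-c": {"yes_ask": 0.30, "no_ask": 0.72},  # sums to 1.02 -> no edge
}

def find_arbitrage(markets: dict) -> dict:
    """Return {market_id: guaranteed_profit_per_share_pair} for mispriced markets."""
    edges = {}
    for market_id, quotes in markets.items():
        cost = quotes["yes_ask"] + quotes["no_ask"]
        if cost < 1.0:
            edges[market_id] = round(1.0 - cost, 4)
    return edges

print(find_arbitrage(markets))  # {'match-b': 0.03}
```

This is exactly the kind of edge that disappears first: it takes no private signal to detect, so whoever scans fastest, with the most capital, collects it.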

The only chance for retail is to find unconventional signals that are not already baked into the model. Simply connecting Claude to an API is not enough for that.

And liability cuts even deeper. Imagine Claude making a mistake in a leveraged trade and losing $2 million, or accidentally sending $10,000 to the wrong address. No disclaimer will protect a lab from the user backlash.

"This will definitely happen. Any mistake will spread instantly and go viral," he noted.

He compared handing a crypto wallet to an AI to injecting untested drugs: the risk is disproportionate to the potential profit. A bug in code is merely embarrassing; lost money is a lawsuit.

At the same time, Anthropic is already exploring the intersection of AI and blockchain. Its SCONE-bench tested how well models can find vulnerabilities in smart contracts. But that is security research, not a product.

The turning point, in his view, will come from competition. As soon as one company decides the crypto market is too important to ignore, the race will start. For now, everyone prefers to stay on the sidelines.

Why the "just make money" scheme does not work

Qureshi takes the argument beyond trading, to the broader idea that AI agents will be able to earn money on their own.

The first option is labor: the agent sells its work. But everything here hinges on economics. There are millions of such models, with no unique skills or advantages. Hiring an AI is essentially buying compute from Anthropic with an extra step in between, so no one will pay more than the cost of API access.
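The economics can be made concrete with a back-of-the-envelope calculation. The per-token prices below are illustrative assumptions, not Anthropic's actual rates; the point is only that with interchangeable agents, the price a buyer will pay collapses toward this compute floor.

```python
# Why "hiring an agent" collapses to the cost of API access.
# Token prices are illustrative assumptions, not real rates.

PRICE_PER_1M_INPUT = 3.00    # $ per 1M input tokens (assumed)
PRICE_PER_1M_OUTPUT = 15.00  # $ per 1M output tokens (assumed)

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Raw compute cost of one task at the assumed API prices."""
    return (input_tokens / 1e6) * PRICE_PER_1M_INPUT + \
           (output_tokens / 1e6) * PRICE_PER_1M_OUTPUT

# An agent "selling its labor" on a task consuming 200k input
# and 50k output tokens:
cost = task_cost(200_000, 50_000)
print(f"compute cost: ${cost:.2f}")

# With millions of interchangeable agents bidding for the same work,
# competition drives the price toward this floor; the margin above
# raw API cost tends to zero.
```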

The second option is starting a business. It sounds logical, but there is a problem here too: all AIs draw ideas from the same data, so they converge on the same solutions. Ask ten instances of Claude for a startup idea and they will suggest almost the same thing.

A real business is built differently. Qureshi points to Peter Thiel's concept of "earned secrets."

These are insights that come from personal experience: a specific time, place, and context. Bankless, for example, did not take off by chance. The founders had the right background, an understanding of the market, and a feel for their community, and all of it came together at the right time.

AI does not have such experience. It simply has no way to acquire these "secrets." This leads to an unpleasant conclusion. AI cannot earn consistently in trading. It cannot compete in the labor market. And it does not create truly original business ideas.

That raises the question: what is its real advantage? Qureshi's answer sounds provocative. Crime. He is not saying this is good. But strip away all the restrictions and rules, and that is exactly where the logic leads.

What does it all mean

Traders who create bots for Polymarket do indeed exist. And some of them really make money. At least for now. But such opportunities do not last long. Quant funds quickly eliminate any profitable inefficiency if it exists in the base model.

Large AI companies are also in no hurry to enter crypto. Until competition pushes them, nothing will change. And the whole idea of autonomous AI agents may ultimately go in a completely different direction. One where there is less control.

For an ordinary trader, the conclusion is simple. When the news reports bots making millions, remember one thing: the market is always stronger, and with AI this is especially true. You are playing against systems that run thousands of bots at millisecond speed.

#AI #AImodel #Trading #Write2Earn #BinanceSquare

$BTC
