Natasha: When AI is "all too human"

For years, we've been told the same promise: thanks to artificial intelligence, you'll be able to create an app in minutes without writing code, almost like ordering a pizza. One of the poster children of this dream was Builder.ai, a London-based startup presented as a magical platform for building software automatically. Major investors, including Microsoft and the Qatar sovereign wealth fund, put over $445 million on the table, convinced they were betting on "cutting-edge" technology.
Then came the wake-up call. Journalistic investigations, internal testimonies, and independent audits revealed a very different picture. Behind the virtual assistant "Natasha," presented as an artificial brain capable of generating code, stood a human workforce of roughly 700 developers in India. Not a marginal support role, but the true engine of production. The apps weren't born from brilliant algorithms but from hours of manual labor, sold nonetheless as the output of an automated platform.
Then there are the numbers. According to subsequent audits, the company allegedly inflated its revenues by as much as 300 percent, reporting tens of millions in sales that never existed. One creditor, Viola Credit, seized $37 million from the company's accounts. Very little remained, and those funds were frozen as well. In 2025, Builder.ai entered insolvency proceedings, with thousands of jobs at risk and legal battles open on multiple fronts.
The paradox is evident. A company presented as the spearhead of AI ends up becoming a textbook example of "AI washing": attaching the "artificial intelligence" label to a traditional service to attract customers, visibility, and capital. The product is no longer what matters. It's the narrative that counts. And when someone tries to ask questions, the answer is shiny slides, slogans, and curated case studies.
Why should this story matter to you, someone who uses AI every day to write, translate, summarize documents, or prepare presentations? Because it shows how easy it is to be dazzled by marketing when it comes to new technologies. Today, simply tucking "AI" into the name of a service is enough to add zeros to a valuation, attract funding, and convince many users that there is something almost magical behind it.
The question to ask, instead, is very simple: what does this tool actually do, in a verifiable way? If a company promises to build a complex app for you in a few hours, how realistic is that? If they say they use advanced algorithms, do they explain, at least in understandable terms, where the automation ends and where human labor takes over? If they can't answer, an alarm bell should ring loud and clear.
On a smaller scale, the same vigilance is needed when you choose "AI" services for your business. Before entrusting sensitive data, money, or critical processes to a platform, it makes sense to run three basic checks: ask for concrete examples, verify real references, and read the terms of use carefully. If you find vague phrasing, miraculous numbers, and very little transparency, it's better to slow down.
The Builder.ai case doesn't prove that artificial intelligence is a scam. It proves the opposite. The technology exists, it works, and in many cases it delivers real value. Precisely for this reason, it is attractive to those who want to ride the wave without the substance to back it up. Here lies the paradox: while some teams work seriously to improve useful tools, others simply use the same label to sell old services in new clothes.
The task today is learning to tell the difference. You don't need to become an engineer. You need to cultivate healthy skepticism, ask concrete questions, and not stop at the first glossy presentation. When in doubt, it's better to trust those who admit the limits of their systems than those who promise miracles with a click. AI is not magic. When someone tries to sell it as such, that is where the paradox turns dangerous.