LC://_LucaCarbone
    Education
    Nov 2, 2025 · 3 min read

    Dark AI: A New Nightmare


    Until yesterday, targeting a company, a bank, or a public administration required expert hackers, months of work, and sophisticated software. Not anymore. Today, it is enough to pay through certain hidden circuits of the web and download an artificial intelligence model trained to do harm.

    Welcome to the era of Dark AI. An underground world, silent, yet increasingly powerful.

    When artificial intelligence turns criminal

    Some names already make cybersecurity agencies shudder. WormGPT, for example, can write tailor-made malware in seconds: it infiltrates systems, steals data, and evades controls. FraudGPT is designed to craft flawless scam emails, indistinguishable from those of a colleague, a bank, or a supplier. No errors, no suspicion. Only damage.

    Then there is DarkBard, which takes all of this to another level: it leverages artificial intelligence to create voice and visual deepfakes in real-time. During a video call, it can replicate the face and voice of a real person, answer questions, and participate in meetings. And no one notices the scam.

    True stories, real consequences

    It sounds like science fiction, but it has already happened. Some criminal groups linked to North Korea used AI-written resumes to get hired by tech companies and steal confidential data. In Iran, an organization known as Charming Kitten launched personalized attacks, using chatbots to create extremely credible phishing messages.

    And in Europe, a remarkable case occurred: a fake executive, generated by artificial intelligence, took part in a corporate meeting. It spoke, asked questions, gave opinions, and even voted. No one realized it wasn't real.

    It's not just a technical attack

    Here lies the real escalation: it is no longer just a matter of code and firewalls, but of trust. If a voice on the phone, a face on Zoom, or a PDF document can be generated in seconds by a machine, what can we still consider authentic?

    Dark AI operates on a cognitive level. It doesn't just aim to breach systems, but to confuse, manipulate, and deceive people. The goal is not merely stealing data, but sowing doubt — making us question everything and everyone.

    What can we do?

    Technology won't stop, but awareness can protect us. We must learn to recognize the warning signs and update our defenses, but above all we must rethink the concept of digital trust.

    In the world that awaits us, the difference between true and false will no longer be obvious, but will be played out in the details. And learning to defend ourselves will be as necessary as learning to use the Internet twenty years ago.
