Blog

  • AI under control, threats off the scale: cybersecurity's new dilemma

    Giants like Anthropic and OpenAI are managing their advanced cyber capabilities as a "protected tier", deploying them with restricted access and extremely controlled roll-outs. Their apparent strength lies in governance and caution.

    Yet the real danger is not the quality or power of the models themselves. The real emergency is speed.

    The alarm comes from the fact that the cycle of vulnerability discovery, exploitation, and patching is accelerating onto timescales that are less and less "human". Companies are concentrating on containing the symptoms (powerful models) while ignoring the underlying risk: an attack-and-defense tempo that is becoming unsustainable for global security.

    In short: while AI is being channeled and regulated, the cyber battlefield is undergoing an escalation in speed that outpaces every control measure. There is a critical lag between technological development and the readiness of our defenses.

    (Source: https://www.cybersecurity360.it/nuove-minacce/apocalissi-cyber-in-arrivo-a-causa-dellai-che-dicono-le-mosse-anthropic-e-openai/)

  • Agentic AI & cyber security

    Agentic artificial intelligence is ushering in a phase of profound revolution in cybersecurity. It is not merely a technology upgrade but a genuine paradigm shift that forces us to rethink how we approach security altogether.

    This extremely powerful new wave of AI is a double-edged sword. On one side, a crucial challenge emerges: we must develop strategies to protect AI itself, since it is becoming a target of new threats. On the other side, AI is not only the problem but also the solution: it holds the keys to tackling the complexity that humanity is creating.

    The key message, then, is clear: the cyber defense of the future can no longer be human alone; it must be augmented and guided by artificial intelligence itself. We are witnessing a virtuous, and potentially dangerous, cycle in which the technology that exposes us is also the one that will save us.

    (Source: https://www.cybersecurity360.it/outlook/ia-agentica-cyber-security-a-che-punto-siamo-e-cosa-ci-attende/)

  • Claude "Mythos": the Anthropic model "too powerful" to release, revealed by a mistake

    Claude "Mythos" is Anthropic's new model. Described by the company itself as "too powerful to be made public", Mythos promises to redefine the boundaries of artificial reasoning and cybersecurity.

    According to official statements, Claude Mythos represents a genuine step change over previous models, with exceptional performance in three critical areas:

    1. Complex logical reasoning

    2. Advanced programming

    3. Cybersecurity

    The detail that has caused the greatest stir among experts is its "recursive self-fixing" capability. Mythos does not just write code: it can autonomously identify vulnerabilities in its own output and correct them in a continuous improvement loop.
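
    As described, the loop alternates between generating code, scanning it for flaws, and rewriting it until the scan comes back clean. A minimal sketch of that loop, in which every function is an invented stand-in (neither `generate_code` nor `find_vulnerabilities` nor `fix` is a real Anthropic interface):

```python
# Hypothetical sketch of a "recursive self-fixing" loop as the article
# describes it; all functions here are stand-in stubs, not a real model API.

def generate_code(task: str) -> str:
    """Stand-in for the model's first code draft (deliberately unsafe)."""
    return "exec(user_code)"

def find_vulnerabilities(code: str) -> list[str]:
    """Stand-in scanner: flags known-dangerous constructs."""
    return [p for p in ("exec(", "eval(") if p in code]

def fix(code: str, issues: list[str]) -> str:
    """Stand-in for the model rewriting its own flagged output."""
    return code.replace("exec(user_code)", "run_sandboxed(user_code)")

def self_fixing_loop(task: str, max_rounds: int = 5) -> str:
    """Generate, scan, and rewrite until the scan finds nothing to flag."""
    code = generate_code(task)
    for _ in range(max_rounds):
        issues = find_vulnerabilities(code)
        if not issues:
            break  # nothing left to flag; stop iterating
        code = fix(code, issues)
    return code
```

    The real capability would put the model itself on both sides of the loop; the bounded `max_rounds` is one way to keep such a cycle of "continuous improvement" from running forever.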

    This feature aims to make software ever more automated and secure, but it raises an ethical dilemma: if an AI is this good at finding flaws in order to fix them, how dangerous could it be in malicious hands? The ability to spot "zero-day" vulnerabilities within seconds could hand cybercriminals an unprecedented weapon.

    Precisely because of these risks, Anthropic has decided not to release Mythos to the general public. The model is currently available, in a Preview version, only to a restricted group of roughly 40 strategic companies, including giants such as Apple, Amazon, and Microsoft.

    The goal is twofold:

    • Use Mythos's power to secure critical infrastructure and open-source code.

    • Give a defensive head start to "well-intentioned actors" before similar tools end up in the wrong hands.

  • NIS2 and corporate cyber security: the new challenges for Boards and IT, between compliance and real risk

    NIS2: cyber risk is no longer just an IT problem

    The implementation of the NIS2 directive is forcing Italian companies into a genuine paradigm shift in cyber risk management. The key message is stark and unavoidable: cyber risk is no longer a purely technical matter to be delegated to the IT department.

    Today, managing cybersecurity is a direct, strategic responsibility of top management and the Board of Directors. The implications are serious, not only for documented compliance but also for exposure to sanctions.

    In short, NIS2 elevates cyber risk to the level of governance. It is no longer enough to "fix problems"; leadership must demonstrate that digital resilience has been embedded in the core business.

    (Source: adapted from Cyber Security 360: https://www.cybersecurity360.it/legal/nis2-e-cybersecurity-aziendale-le-nuove-sfide-per-board-e-it-tra-conformita-e-rischi-reali/)

  • The Morphogenetic Organism

    A commentary on Jack Dorsey and Roelof Botha’s “From Hierarchy to Intelligence” (Block, March 2026)

    (https://block.xyz/inside/from-hierarchy-to-intelligence)

    When Jack Dorsey and Roelof Botha argue that Block is no longer trying to build a better company but to build a company as an intelligence, they are doing something more radical than proposing another management fad. They are closing a two-thousand-year-old parenthesis. Their essay traces a straight line from the Roman contubernium of eight soldiers and a mule to the modern corporation, and it correctly identifies the hidden constant across all those centuries: the human span of control. Every pyramid, every matrix, every squad-and-tribe experiment has been a variation on the same theme, because the fundamental unit of coordination was always a person who could reliably supervise only a handful of other persons. Hierarchy was never an ideology; it was a workaround for a biological limit. What Dorsey and Botha are saying, with admirable bluntness, is that the workaround has finally become optional. And once you accept that premise, the question is no longer how to flatten the pyramid or how to make managers more empathetic. The question is what kind of entity a company becomes when the coordination layer itself stops being human.

    The answer the Block essay gestures at, and the one I want to develop here, is that the company stops being a structure and starts being an organism. But not any organism: a morphogenetic one. Morphogenesis is the process by which a biological body grows its own form in response to signals from its environment, producing the organs it needs where and when it needs them, without a blueprint dictated from above. A morphogenetic company is one that can grow new capabilities, new interfaces, new decision loops, in the same reactive way a body grows tissue around a wound or a plant turns its leaves toward the light. This is, I think, the deeper implication of what Dorsey and Botha call a company world model plus a customer world model plus an intelligence layer. They describe it in the language of software architecture, which is appropriate for an investor letter, but the more honest metaphor is biological. They are not designing a better org chart; they are describing anatomy.

    Consider what their four-part decomposition actually proposes. Capabilities, in their framing, are the atomic financial primitives: payments, lending, card issuance, payroll. They insist these are not products and have no user interfaces of their own. In organic terms, capabilities are organs. A liver does not have a UI either; it has a function, a set of reliability and performance targets, and a metabolic role in a larger body. The world model, with its two faces turned inward and outward, is the nervous system: the continuously updated representation of self and environment that allows the organism to know, without a committee meeting, where it is, what it is doing, and what is happening to it. The intelligence layer that composes capabilities into moments is closer to the endocrine and reflex systems combined, the thing that decides, in the specific instant a restaurant’s cash flow tightens ahead of a seasonal dip, to release a loan the way a pancreas releases insulin. And the interfaces, Square and Cash App and the rest, are skin and sense organs, the surfaces where the organism touches the world and the world touches back. The traditional product roadmap, in which humans hypothesize what to build next, is replaced by something much closer to immune memory: the intelligence layer tries to compose a solution, fails because a capability does not yet exist, and that failure is itself the signal to grow the missing organ. The backlog is generated by reality, not by opinion.

    This is where the Block essay, to its credit, refuses the easy utopia. Dorsey and Botha do not claim that the humans disappear. They claim that the humans move to the edge, which is exactly the right word, and which deserves more attention than they give it. In a morphogenetic organism, the edge is not the periphery in the sense of being unimportant. The edge is the membrane, the place where the inside meets the outside, where new information enters and where the organism’s response is actually tested against the world. Skin is not a demotion from brain. The three roles Block normalizes down to, the individual contributor who builds and operates a layer of the system, the directly responsible individual who owns a cross-cutting problem with the authority to pull resources from anywhere, and the player-coach who combines craft with the development of other people, are all edge roles in this sense. They are the places where the system’s model of reality gets corrected by contact with reality itself. What has been removed is not humanity but the specific human function of information relay, the centurion’s job of aggregating reports from below and transmitting decisions from above. That job is gone because a continuously updated world model does it better, and once you admit this, the cascade is hard to stop: if the relay layer is unnecessary, then the layers above it that existed only to coordinate the relay layer are also unnecessary, and the entire pyramid starts to look like scaffolding around a building that has already been completed.

    What the Block piece does not dwell on, and what seems to me the most interesting unresolved question, is the problem of the immune system. An organism that moves at the speed Dorsey and Botha describe, with an intelligence layer composing financial instruments in real time on the basis of transaction signal, is an organism with a vastly expanded attack surface. In a hierarchical company, security was a checkpoint, a gate you passed through at defined moments, often annoyingly. In a morphogenetic company, security cannot be a checkpoint for the same reason that the nervous system cannot stop at a border control before sending a signal to the hand: any latency is a wound. The implication, which Block’s authors do not quite spell out but which follows directly from their architecture, is that cybersecurity must itself become an organ, integrated into every workflow at the cellular level, behaving more like white blood cells than like a firewall. Anomaly detection, agent identity verification, and continuous encryption become metabolic processes rather than policies. A company that can compose a loan in milliseconds must also be able to recognize and neutralize a hostile composition in milliseconds, and this is not a feature you bolt on afterward. It is a constraint on the entire design. An intelligence-first company that is not also immune-first is not fast; it is merely exposed.
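
    The difference between a checkpoint and an immune response can be sketched in a few lines: security becomes an inline scoring step on every single action rather than a gate crossed at defined moments. The z-score detector below is a toy stand-in for real anomaly detection, and all thresholds are invented:

```python
# Toy "immune" layer: every transaction amount is scored in-band, inside
# the same call path the action itself takes. A real system would use a
# proper detector; the z-score here is only a placeholder.
from statistics import mean, stdev

class ImmuneLayer:
    """Security as metabolism: score each action inline, never at a gate."""
    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history: list[float] = []
        self.window = window        # how much recent "normal" to remember
        self.threshold = threshold  # max tolerated z-score

    def admit(self, amount: float) -> bool:
        h = self.history[-self.window:]
        ok = True
        if len(h) >= 10:  # enough history to know what "normal" looks like
            mu = mean(h)
            sigma = stdev(h) or 1e-9  # avoid division by zero
            ok = abs(amount - mu) / sigma < self.threshold
        if ok:
            self.history.append(amount)  # admitted actions update "normal"
        return ok
```

    A hostile composition arriving mid-stream is rejected in the same call path that legitimate traffic flows through, which is the essay's point: any latency between action and immune response is itself the wound.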

    There is another absence in the Block essay worth naming, which is the question of what happens to judgment under compounding speed. Dorsey and Botha are careful to say that humans at the edge make the ethical calls, the novel calls, the high-stakes calls where the cost of being wrong is existential. I believe them, and I think they are right to draw that line. But the line is harder to hold than the essay suggests, because the intelligence layer’s ability to compose solutions will always be tempted to expand into exactly those domains where the composition looks, statistically, like a good idea but is, ethically, a bad one. A loan offered to a merchant at the perfect moment on the basis of transaction data is a beautiful piece of morphogenesis when the merchant needed the loan, and a form of algorithmic predation when the model has misread the situation. The edge, in other words, is not just where humans add intuition; it is where humans hold the line on the kinds of decisions that should remain illegible to the model on purpose. A morphogenetic company needs not only an immune system but something like a conscience, and conscience, unlike coordination, does not seem to be a function that generalizes well across training data. This is not a refutation of the Block thesis. It is a reminder that the thesis places a much heavier load on the human edge than the clean three-role taxonomy implies, and that player-coaches and DRIs will spend a great deal of their time doing work that has no name yet, somewhere between ethics, product design, and antibody production.

    Seen in this light, the Block essay is best read not as a manifesto but as a field report from the first serious attempt to build a company whose form is grown rather than drawn. The Roman legion was drawn. The railroad org chart that Daniel McCallum produced in the 1850s was drawn. The matrix that McKinsey helped Shell and GE implement was drawn. Even Spotify’s squads and Valve’s flat experiments were drawn, which is why they reverted under scale: a drawing cannot grow, it can only be redrawn, and redrawing is always slower than the environment it is trying to match. What Dorsey and Botha are describing is the first coordination technology in two thousand years that can grow instead of being drawn, and the reason it can grow is that the world model underneath it is updated continuously by the honest signal of money moving between real people and real merchants. The signal is what makes the morphogenesis possible. Without it, you have an AI copilot stapled to a pyramid, which is what most companies are currently building and which Dorsey and Botha correctly dismiss as a cost optimization story with a predictable ending.

    The harder and more interesting claim buried in their piece is that the company of the future will be defined not by what it produces but by what it understands, and that this understanding must get deeper every day or the company is already dead. This is a biological claim, not a managerial one. Organisms that stop learning about their environment do not stagnate; they are eaten. A morphogenetic company is one whose understanding of its customers and of itself compounds with every transaction, and whose form reorganizes itself around that understanding without waiting for a quarterly planning cycle. The role of the people inside it is to stand at the edge, to reach into the places the model cannot yet see, to make the calls the model should not make alone, and to grow the organs the model discovers it is missing. It is, to borrow the vocabulary of the essay one last time, a genuine inversion: the intelligence used to live in the people and the hierarchy routed it; now the intelligence lives in the system and the people do the things intelligence alone cannot do. Whether Block succeeds at this specific implementation is almost beside the point. The question Dorsey and Botha have put on the table, and that every other company will now have to answer, is whether your organization is still being drawn or whether it has begun, at last, to grow.

  • My Bio

    Alessio Antolini has thirty years of experience in IT. He previously served as a Senior Consultant to telecommunications companies in the fields of lawful interception and regulatory compliance, and has worked as a Cybersecurity Manager on digital transformation programs for leading telecommunications players in Italy and abroad.

    Since 2019, he has held the position of CISO.