In 2023, the world discovered ChatGPT. In 2024, every CEO wanted an AI strategy. In 2026, the landscape has crystallized into a sharp divide: on one side, those who built real capabilities and are reaping the rewards. On the other, everyone else, the 90%, who according to BCG are still observing or running limited experiments.
This article is not a list of novelties. You won't find a rundown of models released last quarter, nor enthusiastic predictions about products not yet available. What you'll find is an analysis of what has concretely changed in the past year, separating signal from noise. For each trend, one question: what does it mean for those who work?
AI agents: the big 2026 story between promise and reality
If I had to choose a single word to describe 2026 in AI, it would be "agents." Not the chatbots that answer questions: systems that act autonomously, make decisions, execute sequences of actions without someone pressing a button at every step.
Gartner listed Multiagent Systems among the top 10 strategic trends of 2026, calling it "one of the fastest transformations in enterprise technology since the birth of the public cloud." The prediction is precise: by the end of 2026, 40% of enterprise applications will have AI agents integrated into their core functions. In 2025, that percentage was below 5%. At least an eightfold increase in twelve months.
BCG confirms the direction with numbers: in 2025, agentic AI represented 17% of the total value generated by AI in organizations. The projection for 2028 is 29%. McKinsey, in its "Agents, Robots, and Us" report, goes further: AI technologies available today could technically automate 57% of working hours in the United States. AI agents alone cover 44% of that potential. In 2023, the estimate was 30% by 2030. In hindsight, that forecast wasn't merely cautious: it was very conservative.
But here's the flip side, the part that sales presentations omit. Gartner estimates that over 40% of ongoing agentic projects could be cancelled by 2027. Not due to technical problems, but to rising costs and unclear business value. The AI Incident Database logged 346 incidents in 2025, nearly 50% more than the 233 recorded in 2024, continuing an accelerating trend.
40% of enterprise applications will have integrated AI agents by the end of 2026. Yet 40% of agentic projects risk cancellation by 2027. These two numbers don't contradict each other: they describe a market moving at full speed where the difference between success and failure depends on how you build, not what you buy.
Klarna deployed an AI customer service system equivalent to roughly 700 full-time agents, reducing its headcount from 5,500 to approximately 3,400 over three years. But cases like Klarna, the ones cited on every stage, are the exception, not the rule. The rule is that most organizations jump from demo to production without building the foundations: no governance, no widespread team competence, no specifications that the autonomous system can actually read and follow.
The difference comes down to investing in invisible infrastructure: not the software you purchase, but the ability to define what the agent must do, what it must not touch, and when it should stop and ask a human. It's what I call intention design in the course, and it's the territory where the real game of the next few years is being played.
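To make the idea of "invisible infrastructure" concrete, here is a minimal sketch of what an explicit agent policy might look like: what the agent may do, what it must not touch, and when it must stop and ask a human. All the names and categories below are illustrative assumptions, not a real framework or any specific vendor's API.

```python
# Hypothetical sketch of "intention design": an explicit policy the agent
# consults before every action. Action and trigger names are invented
# for illustration.

ALLOWED_ACTIONS = {"read_ticket", "draft_reply", "tag_ticket"}
FORBIDDEN_ACTIONS = {"issue_refund", "delete_account"}
ESCALATION_TRIGGERS = {"legal_threat", "refund_over_limit"}

def decide(action: str, context: set) -> str:
    """Return the agent's permitted behavior: 'execute', 'refuse', or 'ask_human'."""
    if action in FORBIDDEN_ACTIONS:
        return "refuse"             # what the agent must not touch
    if context & ESCALATION_TRIGGERS:
        return "ask_human"          # when it should stop and ask a human
    if action in ALLOWED_ACTIONS:
        return "execute"            # what the agent is expected to do
    return "ask_human"              # anything unanticipated goes to a human

print(decide("draft_reply", set()))             # execute
print(decide("issue_refund", set()))            # refuse
print(decide("draft_reply", {"legal_threat"}))  # ask_human
```

The point isn't the dozen lines of code: it's that someone had to write down, explicitly, where autonomy ends. Most failed agentic projects never produce this document in any form.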
The model market: less revolution, more engineering
Those following AI in 2024 expected the 2025 models to deliver a qualitative leap comparable to going from GPT-3.5 to GPT-4. It didn't happen. The latest-generation models are better than their predecessors, but not revolutionary. The improvement curve has flattened relative to expectations, and for the first time the industry had to reckon with an uncomfortable reality: raw model performance is no longer growing at the same pace.
In return, something more concrete changed. Inference costs collapsed. Epoch AI estimates that algorithmic efficiency improves roughly fourfold per year: today's results can be achieved a year later using one-fourth of the compute. The original GPT-4, with roughly 1.8 trillion parameters, scored 67% on HumanEval, a coding benchmark. IBM Granite 3.3 2B, released two years later and 900 times smaller, scored 80.5%.
This cost decline is the real enabler of agents. Multi-agent systems coordinating multiple models on complex tasks would have been economically unsustainable at 2024 prices. At 2026 prices, they're starting to make sense.
The other significant evolution concerns reasoning models, the ones that "think" before responding. IBM Granite 3.2, then Claude 3.7, then Gemini 2.5 Flash introduced "hybrid reasoning": the ability to toggle thinking mode on or off depending on the task. You no longer need a different model for every situation. You need a model that knows when to stop and think and when to answer immediately.
The real change of 2026 isn't a smarter model. It's that a model 900 times smaller than the original GPT-4 outperforms it on a coding benchmark. AI isn't just improving in quality: it's improving in accessibility, and that changes everything.
Then came the shockwave from DeepSeek, the Chinese lab that demonstrated how a Mixture of Experts model could reach frontier performance at a fraction of the cost. The impact wasn't purely commercial: it revitalized an architecture the industry had underestimated, and today Meta Llama 4, Alibaba Qwen3, and IBM Granite 4.0 have all adopted variants of the same approach. The consolidation isn't about who builds the biggest model. It's about who makes it most efficient.
Enterprise adoption: the numbers behind the declarations
Here the picture becomes less flattering. BCG surveyed 1,400 C-suite executives across 50 markets and 14 industries. 89% declare that AI and generative AI are among their top three technology priorities. 85% plan to increase investment. 54% expect concrete savings, and among those, half anticipate savings exceeding 10%.
So far, enthusiasm. Then come the real numbers. 78% of organizations use AI in at least one function. But only 5%, those BCG calls "future-built," generate AI value at scale. 60% see no material returns despite real investment. Two-thirds of executives believe it will take at least two more years for AI to move past the inflated-expectations phase. 71% are limited to small-scale pilots.
The gap between those who invest and those who extract value is enormous. Organizations in the top 5% achieve 1.6x operating margin and 3.6x shareholder return over three years compared to laggards. The difference isn't in budget, but in five characteristics BCG isolated: they invest in both productivity and revenue growth, conduct systematic training, monitor AI usage costs, build strategic partnerships, and implement responsible governance.
On the training front, the numbers are discouraging. Only 6% of companies have trained more than 25% of their workforce on AI tools. 59% of executives admit to having little or no confidence in their executive team's AI competence. 45% report having no guidelines or restrictions on AI use at work yet. In a context where agentic AI is about to enter the applications employees use every day, this lack of preparation isn't a future risk: it's a present one.
The gap: 78% of companies use AI. 5% extract real value. 60% see no returns. If the technology were the problem, nobody would succeed. Few succeed because the problem is organizational, not technological.
The AI Act: the first year of real rules
The European Union's AI Act entered into force in August 2024, with phased implementation. On February 2, 2025, the first concrete ban took effect: AI systems posing unacceptable risk are prohibited, from cognitive manipulation to social scoring, from biometric categorization to real-time facial recognition in public spaces (with limited exceptions for law enforcement).
In August 2025, transparency rules for general-purpose models (those like GPT-4, Claude, Gemini) came into effect: mandatory disclosure that content is AI-generated, obligation to design models to prevent illegal content generation, and requirement to publish summaries of copyrighted data used for training. High-impact models, those that could pose systemic risk, must undergo thorough evaluations.
Rules for high-risk systems, those used in healthcare, education, hiring, critical infrastructure, will take effect in August 2027. For businesses, this means preparation time isn't "a few years." It's now.
For SMEs, the picture is mixed. The AI Act mandates regulatory sandboxes provided by national authorities, designed to let smaller firms develop and test AI systems under controlled conditions before public release. But the reality is that most SMEs don't yet have an AI lead, nor an internal policy on employee AI use (BCG's figure: 45% without guidance applies across all company sizes). Preparing for compliance requires first knowing which compliance applies to you, and that requires a baseline understanding of what AI does and how it's used, well before getting into regulatory specifics.
The AI Act is not a document to read "when the time comes." The time is now. Bans on unacceptable risks have been in force since February 2025. Transparency rules started in August 2025. High-risk systems have until 2027. For everything else, the clock has already run out.
The job market: who gets replaced and who becomes irreplaceable
The World Economic Forum published its "Future of Jobs" report in January 2025, with data from over 1,000 major employers representing more than 14 million workers across 55 economies. The central finding is unsurprising: AI, alongside broader technological change, is among the main drivers of labor market transformation through 2030.
But the numbers behind that conclusion tell a more nuanced story than what circulates in alarmist headlines. Bloomberg reports that middle managers account for more than 30% of all white-collar layoffs, a proportion far exceeding their weight in the workforce. Korn Ferry, surveying 15,000 professionals, found that 41% say their organization has already reduced intermediate management layers. The role of those who gather information, synthesize it into reports, and distribute it to decision-makers is being compressed by the very technology that once made them the organizational glue.
In Italy, the landscape has its own characteristics. Only 8% of Italian SMEs have an active AI project, according to available data. 56% of Italian companies improvise AI training without a structured program. Yet job postings requiring AI skills have grown by 93% in one year. The demand exists; the preparation does not.
The McKinsey "Agents, Robots, and Us" report adds a dimension worth noting: it's not just repetitive work being affected. Activities requiring standardized operational judgment, the kind a middle manager performed by following established procedures and aggregated data, are precisely those that an AI agent can handle with increasing effectiveness. What remains beyond automation's reach is contextual judgment, negotiation among divergent interests, the ability to build trust, strategic vision: skills that cannot be delegated but must be developed.
George C. Lee of Goldman Sachs coined a name for the emerging competency: "machine capital management," the ability to direct ecosystems of AI agents, not just human teams. A manager in 2026 doesn't just manage people. They manage workflows where some activities are executed by automated agents and others by human collaborators. Value is generated from the integration between the two.
The 2026 paradox: job postings with AI skills requirements grow by 93%. AI training in Italian companies remains improvised at 56%. Those closing this gap now are positioning themselves on the right side of the transformation.
What all of this means for you
The numbers tell a coherent story, even if an uncomfortable one. AI works, but most organizations fail to extract value because they layer the technology onto processes, skills, and mindsets that aren't ready. AI agents are entering the applications you use every day, whether you know it or not. European regulation is already active, not incoming. And the job market isn't waiting for everyone to catch up.
The answer to all of this isn't technical. It's not buying a subscription to a tool, installing a plugin, or attending a one-hour webinar. The answer is building a structured understanding: knowing what AI can and cannot do, how to communicate with it effectively, how to measure its value, how to manage a team that uses it, how to govern agents that act autonomously.
If you want to build that understanding rather than just chasing the latest news, From User to Orchestrator starts from fundamentals and goes all the way to autonomous AI agents. Eight modules covering everything: from prompt structure to agent governance, from ROI calculation to intention design. Not a glossary of terms. A path that takes you from casual use to conscious direction.
