
An AI agent in logistics can be a force multiplier, but also a multiplier of problems


Ignoring AI tools can have catastrophic consequences for companies’ efficiency and competitiveness — just as serious as rushed and forced implementations. 


Artificial intelligence already tops investment priorities, yet a huge share of projects still end in failure because companies lack a strategy and clear goals, are not technically ready, buy counterfeit products, or give in to psychological market pressure. Far less is said about the fact that the technology itself is imperfect and very expensive, implementation is complex, and on top of that an AI agent hallucinates, inventing a reality that does not exist.

First came robotic process automation (RPA): algorithms that perform repetitive, rule-based tasks but cannot learn. Then came advanced predictive analytics and machine learning (ML), and finally generative artificial intelligence (GenAI), which shows a certain degree of creativity and can create new content from simple language prompts.

This, in very broad terms, is how one can describe the growth in autonomy and the successive layers of intelligence being added to different types of software. The last of the innovations mentioned turned out to be a major breakthrough and, according to IMF estimates, is characterized by a historically unprecedented adoption speed, reaching 100 million users in just a few months. This pace is significantly faster than for other general-purpose technologies such as the internet, cable television, or mobile telephony.

Today, everyone is talking about AI agents — a new approach to the architecture of artificial intelligence systems that connects individual layers and degrees of autonomy. From information published by the creators of such solutions, one might even get the impression that an AI agent is an almost perfect, versatile, and indispensable tool — that it is actually harder to point to things an agent cannot do than to those it performs with ease. The truth, however, turns out to be somewhat harsher than the manufacturers’ marketing claims.  

AI agent — what or who is it? 

An AI agent, in short, is an autonomous system of algorithms using advanced machine learning and natural language processing techniques that allow it to understand the context of its environment, learn from previous interactions, select tools, and independently make decisions that lead to achieving a defined goal. 

All of this can happen with human involvement reduced to a minimum. Unlike commonly used process automation, which operates based on precisely defined rules, an AI agent — by analyzing patterns from the real world, many systems, and multiple sources at the same time — can adapt to them and take dynamic actions or issue appropriate recommendations. 

Whether an agent will have the ability to learn and modify its own behavior based on previous experience depends on its type, but solutions that enable this already exist. 

AI agents can therefore learn their own operating policy, e.g., based on a system of rewards or penalties received from the environment. The general principle, however, is that the agent is given a clearly defined goal, and it bases its execution on data from the environment. 
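The reward-and-penalty mechanism mentioned above is, in machine learning terms, reinforcement learning. A minimal sketch of the idea, using a hypothetical scenario of an agent choosing between two loading bays (the bay names, reward values, and learning parameters are all invented for illustration):

```python
import random

ACTIONS = ["bay_A", "bay_B"]
q_values = {a: 0.0 for a in ACTIONS}  # the agent's learned value of each action
ALPHA = 0.1      # learning rate: how strongly new feedback shifts the estimate
EPSILON = 0.2    # exploration probability: how often to try a random action

def reward(action: str) -> float:
    """Stand-in for feedback from the environment: bay_A is faster on average."""
    return random.gauss(1.0 if action == "bay_A" else 0.2, 0.1)

random.seed(42)
for _ in range(500):
    # Explore occasionally, otherwise exploit the best-known action.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)
    # Nudge the estimate toward the observed reward (a penalty would be negative).
    q_values[action] += ALPHA * (reward(action) - q_values[action])

print(max(q_values, key=q_values.get))  # the policy the agent has learned
```

The agent is never told which bay is better; it discovers this purely from the rewards it receives, which is exactly the self-learned operating policy described above.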

It plans independently and carries out that plan, and if it has to, it performs operations in other systems and uses external tools — all the while striving to maximize efficiency. For a fuller understanding of this phenomenon, it is also worth adding that an AI agent is not a humanoid robot but software that is not meant to replace humans, but to enhance their actions in real time. 
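The perceive-plan-act cycle described above can be sketched in a few lines. Everything here is a hypothetical stand-in (the state fields, the tool names, the planning rules), not any real agent framework:

```python
def fetch_state(world):
    """Perceive: read data from the environment (here, a plain dict)."""
    return dict(world)

TOOLS = {
    # Act: "external tools" the agent may call to change the environment.
    "reorder": lambda w: w.update(stock=w["stock"] + 50),
    "ship":    lambda w: w.update(backlog=max(0, w["backlog"] - 10)),
}

def plan(state):
    """Plan: pick the next tool based on the observed state."""
    if state["stock"] < 20:
        return "reorder"
    if state["backlog"] > 0:
        return "ship"
    return None  # goal reached: enough stock, no backlog

world = {"stock": 5, "backlog": 25}
while (step := plan(fetch_state(world))) is not None:
    TOOLS[step](world)  # the agent acts autonomously until the goal is met

print(world)
```

The human only defines the goal (the conditions under which `plan` returns `None`); the sequence of actions that reaches it is chosen by the loop itself.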

Individual agents can work together, and the people supervising their work can always add context to the conclusions generated by such artificial intelligence systems. 

The potential of artificial intelligence was immediately recognized in supply chain operations

According to data published by Gartner, as many as 27% of supply chain leaders see investing in AI-based solutions as one of the three main factors for gaining a competitive advantage or eliminating an unfavorable competitive position. For 9% of respondents, it is the top objective. 

Similar conclusions are also found in an IDC report, according to which as early as 2024 advanced analytics and AI in the supply chain were the main investment priorities for the next three years. 

What real and useful roles do AI agents offer today in logistics processes? 

– The key to understanding what an AI agent is and what tasks it can perform is to clearly distinguish its function from that of a chatbot or an assistant that passively reacts to commands – explains Sławomir Rodak, R&D Director and Commercial Director at ID Logistics Polska. – An agent operates independently and must achieve the goal set by the human supervising it as quickly as possible. Only in this context can you see that the real potential of an intelligent algorithm, e.g., in contract logistics, lies above all in areas that require dynamic decision-making. 

A good example of a role for an agent, as Sławomir Rodak explains, is planning the work of people and equipment and continuously analyzing operational micro-decisions, such as assigning slots at loading bays, balancing inbound and outbound workload, or responding to unplanned shipments. 

– A multi-agent system representing different operational interests (e.g., Outbound, Inbound, Ramp, Guard, SLA) can efficiently arrive at an optimal decision using data processed by YMS, WMS, and TMS. This makes it possible to shorten lead time, avoid penalties, and improve KPIs without engaging additional staffing or equipment resources. It should be emphasized that although this technology still requires refinement, it is not just another fashionable trend, but a transfer of real market mechanisms and predictions to a level that humans are not able to perform manually at such a volume – he adds.
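One simple way such a multi-agent decision can be coordinated is for each agent to score the candidate slots from its own point of view and for a coordinator to pick the slot with the best aggregate score. The sketch below is a toy illustration of that idea only; the slot data, agent names, and scoring rules are invented, not taken from any real YMS, WMS, or TMS:

```python
SLOTS = [
    {"id": "08:00", "ramp_free": True,  "outbound_load": 0.9, "sla_risk": 0.1},
    {"id": "10:00", "ramp_free": True,  "outbound_load": 0.3, "sla_risk": 0.4},
    {"id": "13:00", "ramp_free": False, "outbound_load": 0.2, "sla_risk": 0.8},
]

AGENTS = {
    # Each agent represents one operational interest and returns a score in [0, 1].
    "Ramp":     lambda s: 1.0 if s["ramp_free"] else 0.0,
    "Outbound": lambda s: 1.0 - s["outbound_load"],  # prefers quiet periods
    "SLA":      lambda s: 1.0 - s["sla_risk"],       # prefers low delay risk
}

def decide(slots):
    """Coordinator: sum the agents' scores and pick the best slot."""
    return max(slots, key=lambda s: sum(agent(s) for agent in AGENTS.values()))

print(decide(SLOTS)["id"])
```

No single agent gets its first choice here; the winning slot is the one that balances all three interests best, which is the kind of trade-off a human planner would otherwise have to weigh manually.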

He also notes that implementations of this kind should be approached with proper preparation. 

– An agent will not operate effectively without a mature technology backbone in areas such as machine learning or computer vision. Solutions of this kind will only make sense when they run on reliable data and in an environment that enables them to make accurate and fast decisions – says Sławomir Rodak.

In his view, without the right foundation the agent itself will not change much in an organization and may even generate more problems than benefits. 

– Multiplication works both ways in this case. This is confirmed by numerous analyses in which the lack of a clear strategy, business goals, or proper input information is the reason for halting costly implementations – he adds.

The agent is capable, but it needs a solid foundation — and equally capable people

The proactive capability of agents, enabling faster and more accurate decisions, has its limitations — including those on the side of the organizations themselves. 

Data published by PwC in 2025 shows, for example, that for 37% of operations and supply chain executives, one of the three biggest challenges in successfully scaling AI solutions is the availability and quality of input data. 

For 42% it is the difficulty of integrating AI solutions with existing systems. This is not new, as similar problems could also be observed when implementing other technologies in the supply chain, such as cloud solutions, which 56% of companies declare using, or digital twins, used by 21% of enterprises. 

Among those who attempted to implement digital solutions but ultimately declared at least partial dissatisfaction with the investments made, the main reason was complex integration (47%), followed closely by data issues (44%). Limited capabilities of solution providers (35%) and internal staff competencies (32%) also ranked high. 

Dreaming of agents without your own data means a hard landing for ambitious assumptions

An exceptionally realistic approach to implementing agents is already being presented by a number of entities, including those operating in the supply chain environment. 

For example, FourKites argues that an AI agent without the right data will not be able to provide clean and meaningful answers to the tasks assigned to it. 

The difference between successful deployments and costly failures essentially comes down to data architecture and the work of agents as comprehensive control towers with real-time visibility into processes taking place at facilities, among suppliers, consignees, carriers, and customers. Without such an approach, this technology becomes just another alerting system that generates noise instead of value.

A similar logic is echoed, among others, by the World Economic Forum (WEF), pointing out that inconsistent, outdated, or unreliable data severely limits the effectiveness of artificial intelligence, and legacy systems create additional integration obstacles. Matters are further complicated by regulatory requirements, so organizations must prioritize preparing their data for use in AI. 

It is therefore crucial to have a clear strategy for collecting external information that combines traditional sources of operational data (inventory levels, tracking and location information), risk signals and supplier data (their condition, geopolitical events, financial risk), as well as unstructured data (emails, meeting notes, market news). 
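A data strategy of this kind ultimately means merging the three source categories into a single record per supplier. The sketch below assumes an invented, minimal schema (the field names and example values are illustrative, not a real data model):

```python
from dataclasses import dataclass, field

@dataclass
class SupplierSnapshot:
    supplier_id: str
    stock_level: int = 0            # operational data (e.g. from a WMS)
    financial_risk: float = 0.0     # risk signal (e.g. from a rating feed)
    notes: list[str] = field(default_factory=list)  # unstructured data

def merge(operational, risk, unstructured):
    """Combine per-source inputs into one snapshot per supplier."""
    snapshots = {}
    for sid, stock in operational.items():
        snapshots[sid] = SupplierSnapshot(sid, stock_level=stock)
    for sid, score in risk.items():
        snapshots.setdefault(sid, SupplierSnapshot(sid)).financial_risk = score
    for sid, note in unstructured:
        snapshots.setdefault(sid, SupplierSnapshot(sid)).notes.append(note)
    return snapshots

view = merge(
    operational={"ACME": 120},
    risk={"ACME": 0.7},
    unstructured=[("ACME", "email: shipment may slip a week")],
)
print(view["ACME"])
```

An agent reading such a snapshot sees healthy stock next to an elevated risk score and a warning note, which is the kind of cross-source picture a single operational system cannot provide on its own.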

The integrated set should create a comprehensive, real-time picture of the supply chain. According to WEF, such model platforms are still in their infancy, although the picture is somewhat brighter than the vision presented back in 2024, when the organization suggested it was very unlikely that the supply chain system would be fully automated in the near future. 

The main obstacle is the fact that the supply chain consists of dozens of industries and thousands of companies and is not able — or perhaps does not want — to integrate individual systems with the latest technologies. Full automation would also be slow and extremely capital-intensive, requiring cooperation that is difficult to achieve, especially in highly competitive areas such as maritime transport. 

In this context, Gartner is somewhat more optimistic, forecasting that by 2030 half of cross-functional supply chain management solutions — i.e., those connecting multiple departments and functions — may use intelligent agents making independent decisions across the entire ecosystem. 

Optimism is already spreading to senior leadership

A study by IBM Institute for Business Value (IBV) and Oxford Economics conducted among more than 300 chief supply chain officers (CSCOs) and chief operating officers (COOs) working in organizations implementing AI-based automation shows that they view AI agents as a business accelerator. 

As many as 62% of them recognize that agents embedded in operational workflows speed up taking action, decision-making, creating recommendations, and communication. As the analysis indicates, as many as 53% of executives are implementing work automation using AI agents in some form, 22% are developing proof-of-concept versions, and 31% are already scaling them. 

Additionally, 70% of executives believe that as early as 2026, thanks to AI agents, their employees will be able to analyze data more accurately and support real-time optimization. Moreover, 76% say overall process efficiency will improve, and 57% expect AI agents to proactively create recommendations based on their knowledge. 

Gartner tempers these sentiments somewhat, arguing that while it may be possible for as many as 95% of data-driven decisions to be at least partially automated in such a short time, this still does not change the fact that only 10% of executives say their companies use AI strategically. 

Even fewer leaders (9%) declare that their organizations have a clearly defined AI vision, which is crucial for smooth implementation, operation, and financing of solutions. 

Developing a comprehensive strategy for using AI in the supply chain in a breakthrough way — i.e., one that enables gaining a competitive advantage — rests on four key pillars: vision, value, risk, and implementation. Only by assessing these factors and being able to act on them does the true potential of the new technology emerge, not only in supply chain operations but also in other areas.

What if AI agents are not as versatile and intelligent as their creators claim? 

Data shows that with a young technology such as AI agents, the sectors that benefit first are those where the complexity of elements and their interdependence is relatively low. According to 3,412 executives from various industries surveyed by Gartner last year, the greatest scope for applying AI agents today is primarily in customer service departments (22%), with marketing departments far behind (12%). 

Next are sales and business applications (11% each), as well as IT operations (10%). Only in 9th place are supply chain-related functions (5%), and an even lower likelihood of handing competencies over to intelligent agents is currently seen in procurement (3%) and legal (2%).

The approach is rather conservative 

Further analyses also indicate that not all managers have unanimously and widely opened their companies and wallets to implement AI agent solutions. Significant investment in this area is currently declared by 19% of respondents, while 42% of executives take a conservative approach. As many as 31% opted for a wait-and-see strategy, and 8% have not made any investment at all. 

The market is already seeing a wave of failed implementations, and Gartner estimates that over 40% of AI agent-related projects will be canceled by 2027 due to runaway costs, unclear business value, and inadequate risk control. Additionally, the market shows a growing imbalance of AI supply over demand — the so-called AI adoption gap. Solutions and innovations proposed by creators are growing clearly faster than customers’ ability to apply them. 

According to analysts, this is only one of many factors that in the short term will lead to the collapse of many intelligent algorithm providers, followed by consolidation and market correction. This will happen after the heavily exploited media noise subsides, and among potential buyers the psychological fear of missing out on a new technology (FOMO – Fear Of Missing Out) fades, and fundamental economic principles prevail. 

In the longer term, specialized products related to AI agents will emerge that meet real customer expectations — this is the normal life cycle of any product and technology. It is not excluded, however, that before that happens, AI agents will become another speculative bubble if investments detach from the technology’s potential to generate real and proportionate economic value translating into specific business results.

Fraud fueled by agentic FOMO

Fear of being left behind and implementation rush have already led to serious pathologies, which in the future may discourage companies from attempting implementation. The market shows the FOMO phenomenon, but also so-called agentic washing — a falsification mechanism in which software vendors present existing products as AI agents. 

These include, for example, disguised virtual assistants, advanced RPA systems, and chatbots. Gartner estimates that globally only 130 creators out of literally thousands of software vendors advertising products as AI agents are genuine. Most applications pretending to be agents lack business value or ROI, because models have not yet reached maturity and the stage of truly autonomous execution of complex business goals. In short, the market has been flooded with knockoffs. 

The agentic bubble has its problems, but if it doesn’t burst, it won’t stop 

According to Gartner, in 2028 — after consolidation and greater maturity — at least 15% of daily business decisions will be made by autonomous agents, a significant increase compared to 2024, when the share was 0%. Additionally, 33% of business software applications will include AI agent algorithms, jumping from below 1% in 2024. 

For now, however, technology sector analysts recommend starting implementation of agent-based AI at this stage of development only when it delivers clear value and return on investment. It should also be remembered that integration with existing systems is technically complex, costly, and often disrupts the enterprise’s operations. 

The maturation period of agents will be very dynamic, but at the same time highly profitable. In the best-case scenario, it is expected that by 2035 AI based on them will generate around 30% — i.e., more than USD 450 billion — of revenues for companies producing business applications. 

That is far more than in 2025, when the share was 2%. New functionalities are also expected to become widespread year by year. In 2027, individual specialized agents will cooperate with one another to solve complex tasks, and this will account for one-third of deployments.  

A year later, they will create networks of ecosystems and respond dynamically in a changing environment. In 2029, we will enter a new dimension of collaboration between agents and humans, and at least 50% of knowledge workers will develop new skills needed to work with, manage, and even create agents to solve complex tasks. 

Before this futuristic scenario comes true, the AI agent must stop making things up

Yes, an AI agent makes things up — or rather hallucinates, as generated outputs that have little to do with reality are professionally referred to. Hallucinations are one of the more serious factors slowing AI development; they are widespread and, for now, difficult to eliminate. 

Google points out, for example, that erroneous results can be caused by an insufficient amount of training data, its skew, bias, or incorrect assumptions in the AI model. 

It may simply not understand information about the real world, physical properties, or certain facts, and if during training it learns incorrect patterns, this will lead to hallucinations in the form of outputs that are seemingly credible but in fact untrue or nonsensical. 

OpenAI explains that models hallucinate because standard training and evaluation procedures reward guessing rather than admitting ignorance. IBM, in turn, explains that model hallucinations are similar to how people recognize silhouettes or shapes in clouds, in a way filling in context for what they see. 

So far, the problem of AI model hallucinations has not been eliminated, and its scale is truly serious. This leads not only to generating false outputs, but also undermines trust in solution providers. 

According to IT department heads at companies employing at least 250 employees surveyed by Gartner in mid-2025, this is one of the main barriers preventing the implementation of fully autonomous agents. 

Only 19% of respondents showed high or complete trust in their vendor's ability to provide protection against hallucinations. A larger group — 26% — nevertheless considered that agents would have a transformative impact on productivity, and 53% believed their impact would be at least significant.
