What LLMs actually do, what existed before them, and why most of the buzz is repackaging
If you have been paying attention to the technology conversation over the past two years, you could be forgiven for thinking that automation was invented sometime around late 2022. The arrival of large language models into mainstream consciousness brought with it an avalanche of content claiming that AI agents would now handle everything — scheduling, data processing, customer communication, document generation, system integration, you name it. What is almost never mentioned in that conversation is that virtually every category of task now being marketed as an LLM automation use case was already being automated, at scale, by enterprise software teams long before ChatGPT existed. The technology has genuinely evolved. The underlying problems being solved have not. And understanding that distinction matters enormously if you are a business trying to make sensible decisions about where AI actually adds value versus where it is simply a new coat of paint on old infrastructure.
The clearest example is robotic process automation, better known as RPA. RPA emerged in the early 2000s — Blue Prism released its first product in 2003, with Automation Anywhere and UiPath arriving around the same period. These platforms did exactly what today’s AI workflow tools claim as their central value proposition: they automated repetitive, rule-based business processes across systems, replacing manual human effort with software that could log into applications, extract data, fill forms, and trigger downstream actions. The technology combined screen scraping — itself in use since the 1990s to capture data from legacy applications — with workflow automation and early artificial-intelligence techniques. The bots were dumb by today’s standards. But they were automating. The notion that automation began with LLMs is simply not historically accurate.
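The rule-based character of those early bots can be made concrete with a toy extractor: it handles exactly the format it was written for and fails on anything else. The pattern and field names below are purely illustrative, not taken from any particular RPA product.

```python
import re

# A rule-based extractor in the spirit of early RPA and screen scraping:
# it only understands the precise layout it was programmed for.
ORDER_PATTERN = re.compile(r"Order #(\d+) from (.+) for \$(\d+\.\d{2})")

def extract_order(line: str) -> dict:
    match = ORDER_PATTERN.match(line)
    if match is None:
        # The classic failure mode: any deviation from the expected
        # format, and the bot simply cannot proceed.
        raise ValueError(f"unrecognised format: {line!r}")
    order_id, customer, amount = match.groups()
    return {"id": int(order_id), "customer": customer, "amount": float(amount)}

print(extract_order("Order #1042 from Acme Corp for $250.00"))
# A human-phrased variant of the same information, e.g.
# "Acme ordered again, invoice 1042, 250 dollars", raises ValueError.
```

That brittleness is exactly the gap LLMs later filled; everything else about the pipeline around such an extractor is unchanged.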
“The platforms doing exactly what today’s AI workflow tools market as their core value — automating repetitive processes across systems — were live and in production at major enterprises twenty years ago.”
The same is true of data pipeline automation. ETL tools such as Informatica, Microsoft SSIS, and Talend emerged from the late 1990s through the mid-2000s, packaging data integration in ways that reduced the need for custom scripting and manual intervention. Extract, transform, load — the process of pulling data from one system, restructuring it, and loading it into another — has been the backbone of enterprise data operations for decades. Scheduled jobs, event-triggered pipelines, conditional logic, error handling and retry mechanisms, cross-system data synchronisation: all of this existed, was documented, was productised, and was running in production environments at banks, retailers, healthcare systems, and logistics companies long before a single LLM was fine-tuned. What AI has changed is the interface layer and the ability to handle unstructured text — not the fundamental concept of moving and transforming data between systems automatically.
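The extract–transform–load pattern, including the retry logic the paragraph mentions, fits in a few lines. This is a minimal sketch, not any vendor’s implementation: the table, field names, and retry parameters are invented for illustration, and an in-memory SQLite database stands in for the target system.

```python
import sqlite3
import time

def extract(rows):
    """Extract: pull raw records from a source system (here, an in-memory list)."""
    return list(rows)

def transform(records):
    """Transform: clean and restructure each record; drop records that fail validation."""
    return [(r["id"], r["name"].strip().title(), round(r["amount"], 2))
            for r in records if r.get("amount") is not None]

def load(conn, rows, retries=3, delay=0.1):
    """Load with a simple retry loop: re-attempt the insert on transient failures."""
    for attempt in range(1, retries + 1):
        try:
            conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
            conn.commit()
            return
        except sqlite3.OperationalError:
            if attempt == retries:
                raise
            time.sleep(delay)

source = [{"id": 1, "name": "  acme corp ", "amount": 19.999},
          {"id": 2, "name": "globex", "amount": None}]  # invalid record, filtered out

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, name TEXT, amount REAL)")
load(conn, transform(extract(source)))
print(conn.execute("SELECT * FROM orders").fetchall())  # → [(1, 'Acme Corp', 20.0)]
```

Every stage here — scheduled extraction, deterministic transformation, retried loads — was routine enterprise practice well before LLMs; only the transformation step has since gained new options for unstructured inputs.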
Where LLMs genuinely do add something new is in handling inputs that rule-based systems cannot. A traditional automation workflow breaks when it encounters something it was not explicitly programmed for — an email that does not match the expected format, a document with an unusual structure, a customer query that falls outside the decision tree. LLMs such as GPT-4 and Claude are built to interpret and generate natural language, which makes them well suited to tasks like summarising legal documents or triaging large volumes of text, and — unlike a fixed decision tree — they can respond sensibly to inputs they were never explicitly programmed to handle. That is a real capability improvement. The ability to interpret ambiguous, unstructured inputs and generate appropriate outputs is not something a 2003-era RPA bot could do. But it is worth being clear: this improvement sits on top of the same underlying automation infrastructure that has existed for twenty years. The LLM is the decision-making layer. The pipelines, the APIs, the databases, the orchestration logic — that is all the same plumbing.
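The layering can be sketched as follows. Here `classify_intent` is a stand-in for a real model call (a keyword stub keeps the example self-contained); the category names, handlers, and routing table are invented for illustration. Everything around the classifier is ordinary, decades-old orchestration logic.

```python
def classify_intent(text: str) -> str:
    """Placeholder for an LLM call that maps free text to a known category.
    A real system would send the text to a model API; a keyword stub
    is used here so the example runs on its own."""
    lowered = text.lower()
    if "refund" in lowered:
        return "refund_request"
    if "invoice" in lowered:
        return "billing"
    return "general"

# Traditional rule-based plumbing: a deterministic routing table of
# downstream handlers, exactly as an RPA or workflow tool would have it.
HANDLERS = {
    "refund_request": lambda t: f"queued for refunds team: {t[:30]}",
    "billing": lambda t: f"forwarded to billing: {t[:30]}",
    "general": lambda t: f"sent to general inbox: {t[:30]}",
}

def route(email_body: str) -> str:
    intent = classify_intent(email_body)   # new: the LLM decision layer
    return HANDLERS[intent](email_body)    # old: deterministic routing and actions

print(route("Hi, I'd like a refund for order 1234"))
```

Swapping the stub for a genuine model call changes only the decision layer; the routing table, the handlers, and the error handling around them are the same plumbing the rest of this piece describes.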
The practical implication for businesses evaluating AI automation tools is this: most of what is being sold to you as an AI-powered breakthrough is a friendlier interface sitting on top of workflows that competent development teams have been building since the early 2000s. That does not make it valueless — lower barriers to entry and more accessible tooling are genuinely useful. But it does mean the questions you should be asking are not “is this AI?” but rather “does this workflow solve a real problem, is it reliable under real conditions, and does the person who built it understand the underlying systems it is connecting?” A Make.com workflow with an LLM node in the middle is still a workflow. It will still break if the data model is wrong, the authentication is misconfigured, or the error handling is not thought through. The AI did not change those requirements. It just made the demo look more impressive.



