Artificial intelligence is now being used across professional work for drafting, research, triage, knowledge support and customer interaction. The serious question is no longer whether AI can be useful. It is whether it is being used in a way that is reliable enough for real professional settings, where speed matters but judgement matters more. Recent research suggests the answer is yes, but only when AI is used within clear limits, with good source grounding, and with active human oversight.
That matters in planning. Planning work is document-heavy, rules-heavy and context-heavy. A system can sound polished while still missing a local requirement, skipping a supporting document dependency, or giving a generic answer that is not fit for the actual submission in front of the user. In that kind of environment, fluent output is not enough. Professional value comes from relevance, traceability and review. That is exactly where AskArchi has been designed to help.
What research says about AI in professional work
One of the clearest studies on AI in professional work comes from a large field experiment involving more than 5,000 customer support agents. Researchers found that access to a generative AI assistant increased productivity by around 14% on average, with the biggest gains among newer and lower-skilled workers. The authors also found evidence that AI helped spread the practices of stronger workers across the workforce, which is a useful clue about where these systems can add value in business settings: they can help raise the floor when work involves repeatable knowledge tasks and structured judgement.
AI works best when the task fits the model
Another influential study looked at 758 consultants completing realistic knowledge-work tasks. When the task was inside the model’s effective capability range, consultants using AI completed 12.2% more tasks, finished them 25.1% faster, and produced work rated more than 40% higher in quality. But when the task fell outside that range, performance dropped. The researchers called this the “jagged technological frontier”. That phrase matters because it captures the central risk of professional AI use: two tasks can look similar on the surface while falling on very different sides of what the model can actually do well.
Why human oversight still matters
That is where a lot of bad AI deployment goes wrong. The problem is not only that AI can be wrong. The problem is that it can be wrong in a neat, plausible, confident-sounding way. Research on human-in-the-loop systems shows that incorrect AI advice can reduce human accuracy, especially when the AI recommendation appears before the human forms their own judgement. In other words, simply having a person somewhere in the process is not enough. The workflow itself has to be designed so that the human remains alert, critical and accountable.
That concern lines up with broader research on automation bias, which describes the tendency to over-rely on automated systems. A long-standing review in decision-support settings found that while automation can improve overall performance, it can also introduce new errors when users defer to the system too readily. More recent research has gone further and found that biased AI recommendations can influence later human decisions even after the AI is gone. That is a useful warning for any business using AI in professional workflows: bad outputs do not just create one-off mistakes, they can quietly train bad habits.
So what does responsible use look like in a planning context?
At UK Planning Gateway, we do not treat AI as a substitute for professional judgement. We treat it as a support layer. AskArchi is designed to help users move through complex planning information more efficiently, but without pretending that planning is just a generic text-generation task. It is not. Real planning work depends on live requirements, local nuance, document interplay, and submission experience.
Why AskArchi uses live public source data
That is why AskArchi is grounded in real, live public source data, not left to improvise from generic model memory alone. This is an important distinction. Research on retrieval-augmented generation, often shortened to RAG, shows why grounding matters. The core idea is simple: large language models become more reliable in knowledge-intensive domains when they can draw on external data sources, especially where accuracy, recency and domain specificity matter. Survey research in this area describes RAG as a way to improve factual accuracy, credibility and access to current domain-specific information, while also recognising that these systems still need proper evaluation and governance.
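To make the idea concrete, here is a deliberately minimal sketch of the retrieval step at the heart of a RAG pipeline. Everything in it is invented for illustration: the document snippets, source names and the simple word-overlap scoring are stand-ins, not AskArchi's implementation. A production system would use a vector index and pass the retrieved passages to a language model, but the principle is the same: the answer is drafted from retrieved source material, not from model memory alone.

```python
# Minimal, illustrative sketch of RAG retrieval. The corpus and scoring
# below are hypothetical stand-ins, not a real planning data source.

def tokenize(text):
    """Lowercase the text and split it into a set of word tokens."""
    cleaned = text.lower().replace(",", "").replace(".", "").replace("?", "")
    return set(cleaned.split())

def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query; return the best matches."""
    query_tokens = tokenize(query)
    scored = sorted(
        documents,
        key=lambda doc: len(query_tokens & tokenize(doc["text"])),
        reverse=True,
    )
    return scored[:top_k]

# Hypothetical snippets standing in for live public planning sources.
corpus = [
    {"source": "local-plan-2024",
     "text": "Extensions in conservation areas require heritage statements."},
    {"source": "validation-checklist",
     "text": "A site location plan must accompany every application."},
    {"source": "design-guide",
     "text": "Roof materials should match the existing dwelling."},
]

# The retrieved passages would then be handed to the model as grounding.
grounding = retrieve(
    "What documents are required for an application in a conservation area?",
    corpus,
)
for doc in grounding:
    print(doc["source"])
```

The design point the sketch makes is the one that matters for governance: because each answer is assembled from identifiable retrieved passages, every claim can be traced back to a named source and checked by a human reviewer.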
That principle is central to how we think about AskArchi. A planning assistant should not guess. It should work from live public planning source material and operate within a framework shaped by real submission practice. That means the system is designed to support answers with planning-relevant source grounding, while our experience in preparing and managing planning submissions informs how those answers are structured, framed and checked. In short, planning expertise comes first, and the AI is there to support it.
We also use a human-in-the-loop approach because professional planning work should not be left to automated confidence theatre. Human review is not there as window dressing. It is there because the research is clear that AI can help and mislead at the same time. A responsible workflow needs subject knowledge, critical review, and a clear understanding of where the tool adds value and where it should stop.
Common risks of AI in planning workflows
In practice, that means we are mindful of several common pitfalls that affect AI use in business.
First, there is the risk of confident inaccuracy. A polished answer can still be wrong. AskArchi reduces that risk by grounding responses in live public source material rather than relying only on generic model recall.
Second, there is the risk of over-reliance. If users are encouraged to accept AI output passively, quality can fall rather than rise. AskArchi is positioned as a support tool within a reviewed planning workflow, not as an autonomous authority.
Third, there is the risk of generic output masquerading as professional advice. Planning is local, procedural and evidence-sensitive. AskArchi is designed around real planning information and real planning use cases, rather than broad consumer-grade prompting.
Fourth, there is the risk of poor governance and weak transparency. The EU AI Act takes a risk-based approach to AI and includes transparency obligations for certain systems that interact directly with people or generate synthetic content. The wider direction of travel is clear: users should know when AI is in play, and providers should think seriously about trust, oversight and the risk of deception.
How AI should support professional judgement
This is why we think the best professional use of AI is not “AI first”. It is evidence first, workflow first, oversight first. AI can speed up access to information, reduce repetitive admin, and help users move through complexity more confidently. But in planning, usefulness only counts if it is tied to the real requirements of the job.
That is the case for AskArchi. It is built to make planning information more usable without pretending that speed is the same as correctness. It is grounded in live public source data. It sits within a human-reviewed workflow. And it is shaped by practical experience of how planning submissions actually work, where the risks usually sit, and what professionals need from a tool they can rely on.
The result is a more serious use of AI for planning work. Not AI as a gimmick. Not AI as an untouchable black box. And not AI as a replacement for judgement. AskArchi is designed as a professional support tool for a professional process, which is exactly where AI is most useful when it is implemented properly.
If you want to see how that works in practice, explore AskArchi and see how UK Planning Gateway is applying AI to real planning workflows with live public source data, human oversight and a clearer standard of professional support.