Explorers, Villagers, and Town Planners: Understanding the Generative AI Divide
by Dinis Cruz and ChatGPT Deep Research, 2025/06/10
Introduction
In the generative AI (GenAI) community of mid-2025, a puzzling divide has emerged. On one side stand the enthusiastic explorers – pioneers who see GenAI as a technology that can be used everywhere, who believe we’ll soon “vibe code” entire applications just by prompting chatbots, and who even predict that AI will replace many coders and workers in short order. On the other side are the cautious town planners – pragmatists who emphasize that “GenAI doesn’t really think”, who point out the rampant hype and the reality of AI hallucinations, and who remind us that today’s GenAI systems are far from ready to autonomously power mission-critical products without a lot of traditional engineering. Both camps include many brilliant individuals – and, interestingly, both camps are right in important ways. So why do they reach such drastically different conclusions about GenAI’s present and future?
This opinion piece (co-authored by Dinis Cruz and ChatGPT Deep Research) argues that Simon Wardley’s Wardley Mapping concept of Explorers, Villagers, and Town Planners (EVTP) offers a powerful framework to explain this divide. By understanding the distinct mindsets and contexts of these archetypes, we can see that the GenAI enthusiasts and skeptics are each viewing the technology from the perspective of different evolutionary phases. In the process, Wardley Maps’ EVTP framework can give us a bit of inner peace and Zen – illuminating why things happen the way they do and why individuals and companies behave in seemingly contradictory ways, especially when it comes to disruptive innovations like GenAI.
Wardley Maps and the EVTP Framework
Wardley Maps, developed by Simon Wardley, are a strategic visualization tool that describes how technologies and practices evolve from novel ideas into mature commodities. A key insight from Wardley Mapping is that different phases of evolution demand different approaches and attitudes. Wardley identified three core archetypes of people or teams needed in an evolving system: Explorers, Villagers, and Town Planners. (These were formerly termed “Pioneers, Settlers, and Town Planners”, but Wardley now prefers the EVTP terminology to avoid colonial connotations.) Each archetype excels in a particular context:
- Explorers thrive in the Genesis and Custom-Build phases of evolution – when something is brand new, unproven, and full of uncertainty. They are brilliant at venturing into the unknown and exploring never-before-discovered concepts. Explorers embrace ambiguity and “show you wonder” with novel, crazy ideas that sometimes look like magic. Crucially, they fail a lot – and that’s expected. In an exploratory phase, failure is not just accepted but encouraged as a way to learn. “Half the time the thing doesn’t work properly,” Wardley notes about Explorers’ creations. You wouldn’t trust an Explorer’s prototype in a safety-critical setting, but their risky experiments make future success possible by discovering what might work. Explorers are comfortable with chaos, move fast, and break things; they see failure as progress and celebrate what does work as a glimpse of the possible.
- Villagers shine in the Product phase – when an idea has shown promise and needs to be turned into a reliable solution for a broader audience. Villagers (analogous to “Settlers”) are the bridge between the wild prototypes and a polished product. They “turn the half-baked thing into something useful for a larger audience”, building trust and smoothing out rough edges. Villagers excel at taking a cool demo from the Explorers and making it commercially viable: they add necessary features, improve usability, listen to customer feedback, and iterate towards a stable product. In Wardley’s terms, “They make the possible future actually happen” by turning prototypes into profitable products. A Villager values user feedback and steady improvement – they don’t mind that the initial idea came from someone else; their pride is in building it right and delivering consistent value.
- Town Planners excel in the Commodity or Industrialized phase – when a product or service needs to be delivered at scale, with high reliability, efficiency, and lowest possible cost. Town Planners are brilliant at industrializing a solution. They take something that works and make it faster, more efficient, and highly dependable. In their hands, a once-novel invention becomes a robust utility that millions can depend on. “You trust what [Town Planners] build,” Wardley says. They thrive on standardization, optimization, and eliminating every edge-case bug – the “pursuit of a flawless system,” as one description puts it. Town Planners implement strict processes, focus on scalability, and hunt down elusive imperfections to ensure consistent, error-free performance. In short, they turn a product into a commodity service that just works reliably, day in and day out.
Despite their differences, Wardley emphasizes that all three archetypes can be brilliant people – they simply have different attitudes and aptitudes suited to different stages of evolution. Moreover, each group’s mindset is not “better” or “worse” in an absolute sense; it’s about fitness for context. In fact, a company needs all three roles to function well in the long run. As Wardley notes, an Explorer might awe you with a breakthrough, but “you wouldn’t trust what they build” for everyday use, whereas you do trust a Town Planner’s infrastructure but you need the Explorer’s crazy ideas to find new directions. Successful innovation is a relay from Explorers to Villagers to Town Planners over time.
Figure: The Explorer, Villager, and Town Planner archetypes – Explorers venture into uncharted territory, Villagers build on discoveries to create useful products, and Town Planners establish the scaled infrastructure to industrialize solutions. The three archetypes correspond to phases of evolution: each has a distinct mindset and strengths, and all are needed for sustainable innovation.
With this EVTP framework in mind, we can better understand why the GenAI community is split: the “explorer mindset” and the “town planner mindset” are evaluating GenAI with completely different expectations. They are, in effect, talking past each other because they’re operating from different phases of the technology’s evolution. Let’s look at each camp through the EVTP lens.
The Explorers’ Perspective: GenAI as a New Frontier of Possibility
Many of the loudest cheerleaders of GenAI today embody the Explorer attitude. These are the researchers, developers, and tinkerers who have been playing with large language models (LLMs) and discovering, to their delight, that GenAI can do things that were “just not possible” a few years ago. From the Explorer’s vantage point, GenAI is an exciting Genesis-phase technology that is rapidly evolving and showing magical results in prototype form.
Explorers focus on what works and what’s possible, often downplaying the failures as mere learning steps. They’re the ones building quirky ChatGPT plugins, automating portions of their coding tasks, generating creative content, and rapidly prototyping AI-driven applications. To an Explorer, every week brings a new breakthrough model or integration: one week it’s GPT-4 passing professional exams or writing code; the next it’s a new open-source model that can generate images or music. The feeling is that we’re in a technological Cambrian explosion, and the Explorer is exhilarated by the pace of change.
Indeed, for an Explorer, GenAI already feels world-changing. They see examples everywhere of AI augmenting or outperforming human work in some way. For instance, code-generation assistants can now build simple apps from natural language descriptions; as one tech leader quipped, “The hottest new programming language is English.” This phrase from Andrej Karpathy (an AI pioneer) captures the Explorer ethos: you can “just say what you want” and the AI will produce working code, a process popularly dubbed “vibe coding.” In vibe coding, you “forget that the code even exists” and iteratively prompt an AI to build your software. Explorers have embraced this paradigm and already report remarkable successes in hackathons and demos. The CEO of OpenAI himself (Sam Altman) predicted that software engineering will be “very different” by the end of 2025 thanks to AI. Similarly, tech CEOs like Mark Zuckerberg have mused that AI could soon do the work of many mid-level engineers. Such statements reinforce the Explorer community’s belief that huge changes are imminent.
From the Explorer perspective, GenAI can (and should) be tried everywhere. They often say things like: “ChatGPT can help in any domain – just give it the right prompt!” or “Why not replace or automate this task with an AI? Let’s experiment.” If a few early experiments fail, no matter – Explorers expect failure as part of the journey. In Wardley Map terms, these folks are operating in the uncertain Genesis/custom-build zone, where “failure is not just accepted, it is encouraged”. Every misstep is a lesson that spurs the next iteration. When something does work – like an LLM that can draft a decent marketing email or debug some code – it is met with wonder and excitement. It’s not an exaggeration to say Explorers sometimes appear to be “in awe” of what GenAI can already do, even if it’s imperfect. They speak about GenAI with the wide-eyed enthusiasm of someone witnessing magic.
This excitement sometimes leads Explorers to sweeping, optimistic conclusions. For example, hearing that “AI will replace all coders” or “We can just vibe-code entire applications without hiring developers” is common in this camp. In early 2023 and 2024, startups rebranded themselves to include “AI” in their names to attract investment, and businesses rushed to sprinkle AI into every product – a hallmark of the hype wave. It’s not just outsiders; many Explorers themselves genuinely believe that we are only a few breakthroughs away from artificial general intelligence or at least from automating vast swaths of knowledge work. After all, if an AI model can pass a bar exam, win at coding competitions, or instantly generate a working website from a description, who’s to say even more dramatic capabilities aren’t around the corner?
Crucially, Explorers often treat GenAI prototypes as if they were products. Because today’s GenAI tools come with slick UIs and seemingly polished interactions (ChatGPT’s fluent conversations, Copilot’s seamless code suggestions), it’s easy to get the illusion that these systems are more reliable and productized than they really are. As Dinis has observed, GenAI’s impressive demos can fool even experienced people into forgetting that behind the curtain, these are still early-stage, non-deterministic systems. It “feels like a product” already, so Explorers sometimes act as though all the hardening and fine-tuning for real-world use is a mere formality.
The result: Explorers readily imagine GenAI everywhere – in every app, embedded in every workflow, disrupting every industry. They are quick to propose replacing human processes with AI or launching new AI-driven features. They celebrate the successes (a chatbot that can answer 80% of customer queries, an AI that generates passable graphics, etc.) and are comparatively unfazed by the failures (like when the same chatbot gives a wrong answer), because in their Genesis mindset, any working instance is proof of concept that it’s possible, and the failures are just bugs to fix later.
To sum up, the Explorers see GenAI as immensely promising and malleable. They’re living in the realm of possibility. Their battle cry is essentially “Look what works – let’s push it further!” And history shows that we owe a lot to these optimistic explorers, because they expand the boundaries of what we think technology can do. However, to someone with a different mindset, this enthusiastic approach can appear naïve or reckless – and that’s where the Town Planners come in.
The Town Planners’ Perspective: GenAI Under a Harsh Spotlight
On the other side of the spectrum, the Town Planners in the GenAI debate are approaching the technology from a totally different angle. These are often seasoned software engineers, architects, risk managers, or domain experts in fields like healthcare, finance, or safety-critical systems. They operate in contexts where reliability, consistency, and safety are paramount – the Product/Commodity end of Wardley’s evolution scale. From the Town Planner perspective, a tool or system isn’t truly valuable until it’s rock-solid in production. Thus, when Town Planners examine GenAI, they home in on everything that can go wrong, because any failure or inconsistency in their world can be a showstopper (or even a disaster).
Town Planners acknowledge GenAI’s potential, but their attention is dominated by its current shortcomings. They’re the ones reminding everyone that “LLMs don’t actually understand or think – they just predict text based on patterns”. In online discussions, you’ll often find comments like: “the LLM doesn't 'think' – it just spits out a sequence of tokens… it would gladly go on forever generating more output, making things up as it goes”. This camp is quick to point out that GenAI systems have no true reasoning or comprehension; they lack the grounded common sense of even a child. In practical terms, that means an LLM can produce outputs that look confident and authoritative but are actually nonsense – the dreaded hallucinations. A Town Planner sees a hallucination not just as a funny mistake, but as evidence that the system is “not fit for purpose” for any critical task where truth and accuracy matter.
Furthermore, Town Planners highlight the “massive hype” around GenAI. They have likely seen hype cycles come and go and are instinctively skeptical of grandiose claims. When Explorers claim “AI will replace X profession” or “we can automate Y completely,” the Town Planner reflex is to raise an eyebrow and ask for proof. They might invoke the Gartner Hype Cycle – expecting that after the current frenzy, we’ll enter a “trough of disillusionment” once people realize the limitations. Indeed, by late 2024, some analysts were already observing the GenAI hype bubble deflating and warning of a reality check. Town Planners feel it is their duty to cut through the noise and make sober assessments.
So what do Town Planners see when they scrutinize GenAI? They see a list of serious concerns that must be addressed before trusting these systems widely:
- Hallucinations and Accuracy: LLMs can generate false information with a straight face. A Town Planner will point out that “hallucination is a major issue” and that an AI often needs a much lower error rate than any human to be acceptable in production. For example, an AI assistant that makes up one wrong medical instruction or one nonexistent legal citation can have catastrophic consequences – far worse than a human making a typo. (In 2023, a New York lawyer learned this the hard way when he used ChatGPT to draft a brief; the AI fabricated case law, and the lawyers involved were sanctioned for submitting false information to the court. The judge fined them, noting they had “failed to believe that a piece of technology could be making up cases out of whole cloth”. Such incidents underscore exactly why Town Planners are extremely cautious.)
- Lack of Determinism: In many enterprise and safety contexts, you need to know that given the same input, the system will produce the same (correct) output every time. Traditional software can guarantee that, but GenAI models are stochastic by nature. They might output something 90% correct one run and subtly different the next. To a Town Planner, this unpredictability is unacceptable for, say, a banking ledger, an aircraft control system, or even a customer support chatbot that might one day offer a refund it shouldn’t. Town Planners often say: if we cannot fully trust and predict the AI’s behavior, we cannot deploy it at scale without human oversight on every response – which largely negates the efficiency gains.
- Scalability and Performance: Explorers might be happy when a prototype works on a sample dataset, but Town Planners worry about real-world scale. Will the GenAI system handle millions of requests quickly and reliably? As one tech leader noted, “it is so easy to build a charismatic prototype... [but] so hard to get from [there] to something that is valuable and rock-steady in production.” Many GenAI models require heavy compute; Town Planners see latency and cost challenges in using them in production (e.g. a query taking 10 seconds and a lot of GPU power – too slow and expensive for a live service). They also worry about edge cases – those weird inputs or situations that weren’t in the demo, but will happen in real life and could break the system.
- Security and Privacy: Town Planners handle data compliance, security audits, and uptime guarantees. An Explorer might eagerly plug customer data into an LLM API to see what insights it gives, but a Town Planner will balk: Is the data safe? What about privacy regulations? Early GenAI applications have already raised red flags – for instance, employees feeding sensitive info into ChatGPT (which then becomes part of the model’s training data), or an AI image generator leaking parts of its training data. In high-regulation industries, these are showstoppers until mitigated.
- Maintainability and Integration: Any production system needs monitoring, version control, testing, and the ability to fix bugs. Town Planners point out that current GenAI often behaves like a black box. If it gives a wrong answer, how do we debug that? How do we test an AI thoroughly? Also, integrating an AI system into existing software pipelines is non-trivial – it introduces new points of failure and complexity (like model updates, prompt maintenance, etc.). All of this requires careful engineering (sometimes called MLOps for AI systems). In short, Town Planners see a huge amount of “glue” and support infrastructure that needs to be built around the AI before it’s truly production-ready (a minimal sketch of what that glue looks like follows this list).
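To make that “glue” concrete, here is a minimal sketch – an illustrative example rather than a prescription from the authors – of the kind of wrapper Town Planners tend to ask for: pin the sampling temperature to reduce run-to-run variation (the determinism worry above), validate that the model’s output has the expected shape, retry on failure, and log everything so the black box is at least observable. The call_llm callable is a stand-in for whatever model client is actually used; nothing here is tied to a specific vendor API.

```python
import json
import logging
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-guardrail")


def guarded_answer(
    call_llm: Callable[..., str],  # placeholder for your model client
    prompt: str,
    required_keys: set,
    max_retries: int = 2,
) -> Optional[dict]:
    """Ask the model for a JSON answer and accept it only if it parses and
    contains the expected fields; otherwise retry, then escalate to a human."""
    for attempt in range(1, max_retries + 1):
        # temperature=0 reduces (but does not eliminate) run-to-run variation
        raw = call_llm(prompt=prompt, temperature=0)
        log.info("attempt %d: model returned %d characters", attempt, len(raw))
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            log.warning("attempt %d: output was not valid JSON", attempt)
            continue
        if not isinstance(data, dict):
            log.warning("attempt %d: expected a JSON object", attempt)
            continue
        missing = required_keys - data.keys()
        if missing:
            log.warning("attempt %d: missing expected fields %s", attempt, missing)
            continue
        return data  # passed the basic checks
    log.error("no acceptable answer after %d attempts; route to a human", max_retries)
    return None
```

The same pattern extends naturally to schema libraries, content filters, audit trails and version pinning – exactly the Non-Functional Requirements the Town Planner camp keeps pointing at.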
Given these concerns, the Town Planner mindset tends to issue cautionary verdicts on GenAI. Common refrains from this camp include: “LLMs are an impressive demo, but not ready for prime time.” Or “You absolutely shouldn’t trust an AI to do X in its current form.” They are quick to note that behind every successful GenAI product today, there is usually a lot of non-AI engineering making it reliable – from embedding databases to handle memory, to human feedback loops, to rule-based filters catching the AI’s mistakes. In Dinis’s research on GenAI engineering, he emphasizes the need to invest heavily in Non-Functional Requirements (NFRs) – things like security, reliability, version control, auditing – when building GenAI solutions. The Town Planner instinct is very much aligned with that: without meeting NFRs, they won’t trust the system.
In fact, some Town Planners argue that we should not deploy LLM-based solutions in certain environments at all until they improve. If you need a “high degree of reliability and performance,” perhaps an LLM is the wrong tool today, they argue. Use a deterministic algorithm, or a database, or a traditional software approach – something proven. From this view, trying to insert GenAI into a mission-critical pipeline right now is like trying to use a half-baked prototype where a rock-solid utility is needed. It’s just asking for trouble. As Wardley’s mapping would put it, Town Planners see LLMs as still mostly in the Genesis or Custom-Built stage, not yet evolved enough to be a Commodity component for general use.
To sum up, the Town Planners see hype outpacing reality with GenAI. They focus on what’s wrong and what could go wrong, because their world is one where failure is not an option. If an Explorer is marveling at the “magic” of an AI-written script, the Town Planner is the one pointing out that the script might work 9 times but on the 10th time it could do something dangerously incorrect – and that’s not okay if real users or business are at stake. In their eyes, GenAI is interesting, but immature and in many cases inadvisable to rely on just yet.
Both Camps Are “Brilliant” – and Both Are Right (From Their Point of View)
Reconciling these two perspectives is not easy – they almost sound like they live in different worlds. But the Wardley Maps EVTP model tells us an important thing: both the Explorers and the Town Planners are brilliant, and both are right, within their own context. The explorer who proclaims GenAI’s boundless potential and the town planner who tempers that with skepticism are each seeing truths – the difference lies in where they are standing.
It comes down to base assumptions and success criteria:
- The Explorer camp is operating with Genesis-phase assumptions: that something only needs to work sometimes to be valuable, that failures are learning opportunities, and that speed of innovation outweighs perfection. By those measures, GenAI is already a huge success. There are countless examples of LLMs working – writing code, answering questions, generating designs – tasks that would either take humans much longer or were not possible before. Every new capability unlocked is a cause for celebration. And the many failures? They’re just the price of exploration. In a hackathon or R&D lab, if an AI gives wrong answers 40% of the time but 60% of the time it does something useful, an Explorer considers that a great tool (they’ll try to improve the 40%, but meanwhile they’re benefiting from the 60%). Explorers measure GenAI by its potential and breakthroughs, not by consistency.
- The Town Planner camp is using Product/Commodity-phase criteria: that something needs to work reliably at scale to be valuable, that failures are unacceptable or very costly, and that robustness and safety outweigh novelty. By those measures, current GenAI is largely not a success – it’s a tantalizing prototype at best. A solution that is “amazing 60% of the time, and flawed 40%” is unusable in production for a Town Planner; even 99% correct might be insufficient if that 1% error could cause serious harm. Town Planners measure GenAI by its worst failures and limitations, not by its occasional flashes of brilliance. Because in their world, a chain is only as strong as its weakest link.
Simon Wardley often sums up this tension by saying that each group is right from where they stand. The Explorers are correct that GenAI can do astonishing things and can be pushed into many new applications when used in the right context. The Town Planners are correct that if you treat GenAI uncritically or deploy it naively in high-stakes scenarios, you’re courting disaster, because the tech truly isn’t fully baked yet and the hype is out of control. Each side sees a part of the truth. The problem is, if they don’t recognize the context, they will keep talking past each other:
- An Explorer showing a successful GenAI demo may get frustrated when a Town Planner only zeroes in on the hypothetical failure modes. “Why can’t you see how powerful this is? It worked, didn’t it!” The Explorer thinks the Town Planner is being overly negative or stuck in old ways, ignoring the massive opportunity right in front of them.
- Conversely, a Town Planner raising legitimate concerns may face eye-rolls from Explorers. “You’re just being a Luddite or a wet blanket. Sure it’s not perfect, but neither was the internet in 1995!” The Town Planner thinks the Explorer is being irresponsible, possibly blinded by shiny new tech and not thinking about consequences.
According to EVTP, both attitudes are not only valid but necessary for an organization to innovate safely. The key is understanding when and where each mindset should dominate. In early exploration, you need the Explorers to move fast and find out what’s possible. In later stages, you need the Town Planners to refine and bulletproof the solutions. Problems arise when there’s a mismatch – like trying to force an exploratory prototype directly into a production environment (without the needed transition), or conversely, applying stringent production controls too early and stifling innovation.
It’s also worth noting the often-forgotten middle role: the Villagers (Settlers). In the current GenAI saga, Villagers are the ones who take the cool demo from an Explorer and methodically turn it into a product that a Town Planner could eventually industrialize. We don’t hear about them as much in the hype or the skepticism, because Villagers work more quietly, bridging the gap. But they’re crucial. For example, consider an AI startup that takes the latest research model (an Explorer output) and then spends months adding a user interface, a monitoring system, fine-tuning on domain data, integrating it with a database, and putting guardrails around outputs – that’s classic Villager work: turning a prototype into a polished product. Many GenAI applications succeeding today (from GitHub Copilot to AI assistants in software) owe their success to this Villager-like effort of productization. Eventually, once that product is stable and the market understood, Town Planners can scale it up and make it a utility.
Wardley Maps and EVTP give a sense of inner peace because they remind us that it’s natural for people to behave differently when facing unknown vs. known situations. Dinis often says that frameworks like EVTP “explain why things happen the way they do, and why individuals/companies behave the way they do.” Instead of getting upset that “so-and-so just doesn’t get it” or that “this group is being reckless,” we can acknowledge that Explorers gonna explore, Town Planners gonna plan – each is doing what comes naturally given their context and experience. Realizing this can turn frustration into understanding. In fact, when harnessed properly, the tension between the two can be very productive: the Explorers drive change, the Town Planners ensure stability, and the Villagers help translate between the two worlds.
The Danger of Blind Spots and Extreme Decisions
While both camps have their truths, problems occur when either side’s blind spots lead to extreme decisions without regard for context. Let’s examine two cautionary scenarios:
- Explorer-Led Overreach: When decisions are driven solely by the Explorer mindset, there’s a risk of irrational exuberance. In the GenAI boom, we’ve seen cases of leaders or companies rushing to reshape their workforce and strategy around AI without fully understanding its limits. For instance, some managers have prematurely considered firing employees because “AI will replace them.” An Explorer-dominated view might say, “Why keep so many copywriters or junior developers? We have ChatGPT/Copilot now!” There have been reports of companies reducing staff after adopting AI tools, only to find the tools couldn’t fully deliver the hoped-for gains, leading to gaps in productivity. This is reminiscent of earlier hype-driven mistakes (recall how some businesses in the early days of the internet or outsourcing made drastic cuts expecting immediate automation that didn’t pan out). The Wardley Maps lens would warn: if you treat a Genesis-phase technology as if it were a Commodity, you’re in for trouble. Firing your experienced team in hopes that an LLM can do all their work is likely a bad decision based on a blind spot – the Explorer’s tendency to overlook how much value human judgment and expertise still contribute, and how much maturation the AI still needs. In short, overestimating GenAI can burn you. It’s like deploying a half-tested prototype to millions of users – the failures will come back to haunt you.
- Town Planner-Led Missed Opportunities: Conversely, when the Town Planner mindset exclusively calls the shots, the risk is overconservative inertia. Some organizations have been so turned off by the hype and the imperfections of GenAI that they are choosing to “not embrace GenAI at all” because it’s deemed “not good enough yet.” A strict Town Planner may say: “Our policy is to ban use of tools like ChatGPT internally until they are proven 100% reliable. We’ll revisit AI in a few years.” This stance avoids short-term risk, but it may incur a larger strategic risk of falling behind. If competitors (with a healthy Explorer-Villager mix) start using GenAI to boost productivity (even with human oversight), the overly cautious company might miss out on incremental gains and learning. There is a pattern across industries that those who refuse to experiment during the fluid, early phase of a technology may struggle to catch up later when the tech matures. In the 1990s, some businesses dismissed the web as a toy – until suddenly e-commerce disrupted their market. Likewise, a blanket “wait until it’s perfect” approach to GenAI could mean missed opportunities and innovation lag. The blind spot here is the Town Planner’s tendency to undervalue the potential and the speed of improvement. GenAI is improving rapidly, and finding safe, small ways to pilot it (with appropriate checks) is often smarter than ignoring it entirely.
In both extremes, the lack of balance leads to outcomes that Wardley Mapping would predict as failure modes. Explorers without Town Planner oversight can run off a cliff; Town Planners without Explorer input can stagnate. History is rife with examples in all industries: companies that bet too big on an immature tech and crashed, and companies that dismissed a trend until it was too late. As one commentator in a machine learning forum noted, when discussing why many AI projects stall, “I see the same as well. Not a new phenomenon. Has happened again and again with previous tech trends too.” Indeed, this pattern of hype and skepticism is cyclical. The names change (whether it was the personal computer, the internet, smartphones, or now GenAI), but the dynamic of Explorers vs. Town Planners is always at play. Recognizing this pattern can help leaders avoid repeating the mistakes.
The key is decision-making with context. In strategic planning, it’s vital to ask: Are we in a Genesis situation here, or a Commodity situation? If you’re dealing with something genuinely novel (like figuring out how to apply an LLM to a new problem), lean into the Explorer approach – encourage experimentation, accept failures, but don’t bet the farm on immediate reliability. If you’re dealing with something that needs to be as predictable as a utility (like financial transaction processing), you must impose Town Planner rigor – maybe the GenAI piece plays a smaller advisory role there until it matures, or is used with human review. Wardley Mapping even advocates “appropriate methods” for each stage: agile, trial-and-error methods in Genesis; Six Sigma-style control in Commodity. Problems occur when the method doesn’t match the stage.
Finding Balance and Moving Forward
So, how can we move forward in the GenAI revolution without falling into either trap? The answer lies in balance and timing – effectively integrating the Explorers, Villagers, and Town Planners in our approach to GenAI:
- Acknowledge the Evolutionary Stage: Broadly, GenAI technology today (large language models, diffusion image generators, etc.) is somewhere between Genesis and Product on Wardley’s evolution scale. We have plenty of custom-built prototypes and a growing number of products built on GenAI, but very few aspects of it are true Commodity (plug-and-play utilities) yet. By recognizing this, we set appropriate expectations. We wouldn’t expect a custom R&D prototype to be flawlessly reliable – so we shouldn’t expect that yet of most GenAI systems without significant engineering. On the flip side, we know genesis-phase technologies can rapidly evolve (think how quickly electricity or the automobile went from curiosities to utilities in past eras), so we also shouldn’t assume the current limitations will last forever.
- Foster Communication Between Camps: If you work in a team or company dealing with GenAI, actively facilitate dialogues between the excited innovators (Explorers) and the seasoned operators (Town Planners). Each should educate the other on their concerns and insights. For example, let the Explorer demo what the AI can do – but also let the Town Planner perform some stress tests or highlight what happens in worst-case scenarios. Both sides might learn something. Instead of a tug-of-war (“deploy now!” vs “never deploy”), aim for a collaborative approach: How do we get this to a state where we can deploy it responsibly? That’s where the Villager mindset often comes in – translating the Explorer’s prototype into something closer to the Town Planner’s standards.
- Invest in Bridging Solutions: Many gaps between what Explorers want and what Town Planners need can be filled with smart engineering and process. For instance, techniques like Retrieval-Augmented Generation (RAG) and semantic knowledge graphs are being developed to reduce hallucinations by grounding the AI in factual databases. Dinis has been involved in solutions using Semantic Knowledge Graphs to give GenAI a firmer grasp of truth and context. By linking an LLM to a curated knowledge graph (GraphRAG, for example), the AI’s outputs can be checked or guided by real data, thus satisfying some Town Planner requirements for accuracy and auditability (a minimal sketch of this grounding pattern follows this list). Similarly, robust NFR frameworks – covering security (no leaking data), version control (so you know which model version did what), reliability monitoring (to catch drifts or anomalies) – can be layered around GenAI deployments. In essence, treat the GenAI component not as a standalone magician, but as part of a larger engineered system with safety nets. This is the Villager-style work that can gradually make the Explorers’ discoveries palatable to the Town Planners.
- Start Small, Then Scale: Rather than outright banning GenAI until it’s perfect, Town Planners can identify low-risk, high-reward pilot projects to test the waters. Likewise, Explorers should identify critical areas where caution is warranted and agree not to force AI in there yet. By starting with “safe-to-fail” experiments (for example, using an LLM internally to draft reports, rather than directly messaging customers; or deploying AI for beta users only), you can learn and build confidence. As the technology proves itself, expand its use. This iterative approach allows for the co-evolution of the technology and the organizational capability. It’s exactly how a prototype transitions to a product in Wardley’s model – gradually, with feedback loops.
- Keep Humans in the Loop (for Now): One practical compromise in many GenAI applications today is to keep a human in the loop as a fail-safe. Explorers might chafe at this (“it reduces efficiency!”), but it’s a reasonable interim step. For example, AI-written content can be reviewed by an editor, AI-driven code can be code-reviewed and tested like any other code, AI decisions in a business process can require human sign-off for edge cases, etc. This hybrid approach addresses Town Planner concerns while still leveraging Explorer innovations. Over time, as the AI components become more proven, the human oversight can be dialed back. Essentially, the organization can adjust the balance between AI automation and human control as trust increases.
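As a concrete illustration of the grounding idea from “Invest in Bridging Solutions” and the fail-safe from “Keep Humans in the Loop”, here is a deliberately simple, vendor-neutral sketch: retrieve supporting facts from a curated store, constrain the prompt to those facts, and route anything weakly grounded (or, while trust is being built, everything) to a human reviewer. The keyword-overlap retrieval, the toy knowledge base, and the call_llm placeholder are illustrative assumptions – a real GraphRAG setup would query a knowledge graph or an embedding index instead.

```python
from typing import Callable

# Toy corpus standing in for a curated knowledge base or knowledge-graph export.
KNOWLEDGE_BASE = [
    "Refund requests over 500 EUR require manager approval.",
    "Standard delivery time for EU orders is 3-5 business days.",
]


def retrieve(question: str, corpus: list, top_k: int = 2) -> list:
    """Naive keyword-overlap retrieval; a real system would use embeddings
    or a knowledge-graph query (e.g. GraphRAG) instead."""
    terms = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return [doc for doc in scored[:top_k] if terms & set(doc.lower().split())]


def grounded_answer(question: str, call_llm: Callable[[str], str]) -> dict:
    """Answer only from retrieved facts; route to a human when grounding is weak."""
    facts = retrieve(question, KNOWLEDGE_BASE)
    if not facts:
        # No supporting facts found: do not let the model improvise.
        return {"answer": None, "needs_human_review": True}
    prompt = (
        "Answer the question using ONLY the facts below. "
        "If the facts are not sufficient, say 'I don't know'.\n\n"
        "Facts:\n- " + "\n- ".join(facts) + f"\n\nQuestion: {question}"
    )
    draft = call_llm(prompt)
    # While trust is being built, every grounded draft still goes to a person.
    return {"answer": draft, "needs_human_review": True}
```

As confidence grows, the needs_human_review flag can be relaxed for low-risk queries – which is exactly the gradual Explorer-to-Villager-to-Town-Planner hand-off described above.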
Finally, it’s important to embrace the mindset that Explorers, Villagers, and Town Planners are allies, not enemies. Each brings something vital to the table. As Simon Wardley humorously noted, in the past society sometimes “burnt the explorers at the stake” for their crazy ideas – we should avoid doing the modern equivalent (dismissing or punishing innovators for thinking outside the box). Likewise, we shouldn’t disparage the Town Planners as mere obstructionists – they are the ones who will later save us from blowing ourselves up with our new tech. Wardley’s EVTP model, when implemented in organizations, even formalizes a “mechanism of theft” – meaning, at a certain point, you want the Villagers to steal a project from the Explorers to scale it up properly, and later the Town Planners to steal it from the Villagers to industrialize it. It’s a healthy hand-off.
In conclusion, the seemingly unrealistic conclusions that clever people reach about GenAI (“it will solve everything!” vs “it’s mostly hype!”) can be understood by recognizing which hat they are wearing – Explorer, Villager, or Town Planner. The GenAI community’s split is a classic case of different archetypes viewing the same elephant from different sides. By applying the EVTP lens, we not only explain this divide but also see a path to reconcile it. Explorers need to appreciate the value of the Town Planners’ skepticism, to ensure their inventions don’t flame out due to unaddressed flaws. Town Planners need to appreciate the Explorers’ vision and not strangle a promising innovation in its crib with premature demands for perfection. And organizations as well as individuals should nurture the Villager translators who turn nascent GenAI capabilities into robust solutions.
GenAI is a transformative technology, but it is still on its evolutionary journey. If we use the right mindset at the right time – celebrating the wonders in the lab, while methodically engineering reliability for the field – we can avoid the worst pitfalls of both hype and fear. As Dinis Cruz often notes, frameworks like Wardley Maps EVTP have given him an inner calm: a realization that everything happening – the excitement, the pushback, the trials and errors – is exactly what we should expect at this stage of evolution. That understanding is empowering. It means the goal is not to prove one camp “right” and the other “wrong,” but to let each play its role and guide GenAI from a raw, potent idea into a mature, ubiquitous utility. In the end, today’s explorers will seed tomorrow’s AI-augmented world, today’s skeptics will make it safe and reliable – and we’ll wonder why we ever argued so fiercely in 2025 about a future that, in hindsight, just needed time to unfold in the right way.
References:
- Wardley, Simon. “How to organise yourself - the dangerous path to Explorer, Villager and Town Planners.” Bits or Pieces (blog), Dec 7, 2023. Describes EVTP as a system for organizing companies in evolving environments.
- Wardley, Simon. Ibid. Defines the characteristics of Explorers (core research, high failure, “what if” magic) and Town Planners (industrialization, efficiency, trust).
- Codetv.dev Blog. “Understand your work archetype: explorers, villagers, and town planners.” Provides a practical breakdown of how each archetype thrives and struggles (e.g., explorers thrive on ambiguity and failing fast, town planners pursue flawless systems).
- Business Insider. “Silicon Valley's Next Act: Bringing 'Vibe Coding' to the World.” (Feb 2025). Discusses Andrej Karpathy’s “vibe coding” concept and includes the quote “The hottest new programming language is English” – highlighting explorer optimism about AI-assisted coding. Also notes Altman and Zuckerberg predicting major changes in programming due to AI.
- Sankar, Shyam (Palantir CTO). Quoted in Bitstrapped Blog, “The LLM Mirage: Why Dazzling Prototypes Often Crumble in Production.” (Oct 2024). “It is so easy to build a charismatic prototype... so hard to get to something valuable and rocksteady in production.” Highlights the prototype-to-production gap.
- Bitstrapped Blog. Ibid. Enumerates production challenges for LLM applications: scalability, reliability, latency, cost, security, explainability, monitoring.
- Reddit r/MachineLearning discussion. “Challenges moving LLM applications to production.” (2024). Emphasizes that “Hallucination is a major issue… LLMs are held to a higher standard than humans” for business viability. Notes that many companies lack a framework to assess AI risks vs benefits and that similar hype cycles have occurred before.
- Reuters. “New York lawyers sanctioned for using fake ChatGPT cases in legal brief.” (June 26, 2023). Real-world example of AI hallucination causing harm: lawyers fined after an AI-generated brief cited six nonexistent cases, with the firm admitting they “failed to believe” the tech could fabricate so convincingly.
- LinkedIn – Dinis Cruz. “How GenAI workflows add value to businesses.” (June 2025). Dinis underscores the importance of having NFRs (Non-Functional Requirements like security, reliability, etc.) in place for GenAI tools and workflows – reinforcing the Town Planner view that engineering foundations are crucial.
- LinkedIn – Post on GraphRAG. “The journey towards a knowledge graph for generative AI.” (2024). Describes how Semantic Knowledge Graphs integrated with GenAI (GraphRAG) can improve accuracy and relevance by providing context – an approach to address some GenAI limitations for production use.