Key Takeaways
- AI makes work easier, but knowing what to solve matters most.
- Better context, process and judgement beat better prompts alone.
- Your edge is what you understand, believe and uniquely see.
We built an app live in front of a room
At a recent Exeter Business Network event, we tried a slightly risky experiment.
I delivered a talk about AI while Richard, my business partner, sat beside me with a machine, a screen, and a live coding environment open.
The idea was simple: I would take a brief from the room, and Richard would turn that brief into an application while the talk was happening.
By the end of the hour, we had produced and deployed a working app for the people in the room.
Not a polished commercial product. Not something pretending to be finished. A working prototype, built from the live suggestions, context and frustrations of the audience.
Instead of speaking about AI as an abstract idea, we could show what happens when you combine a clear enough brief, a real audience, the right tools and someone who knows how to steer the process.
This article captures the tips we shared in the talk, as well as its narrative arc: from understanding AI development in 2026 through to what will help you create apps, content, ideas or even businesses for this future world.
But let's start right at the beginning…

Why we asked the audience for the brief
We chose to take the brief from the room because everyone there understood the context.
They were all at the Exeter Business Network event. They knew what networking feels like when it works well. They also knew the awkward bits, the missed opportunities, the forgotten conversations and the moments where you wish the process was just a bit easier.
So rather than inventing a fictional product, we asked a simple question:
“What would make this network more valuable to you tomorrow than it is today?”
That gave us a useful starting point. The responses were gathered, exported and turned into a brief Richard could work from.
Some responses were practical, like a directory of expertise, real-time referrals, and easier ways to connect after an event. Others were more human, including the request for “Richard Wain’s beard grooming tips”.
That kind of answer is easy to laugh at, but it also proves the point. Good digital ideas rarely start as neat technical requirements.
They usually start as small frustrations, habits, jokes, repeated questions and bits of context that only make sense when you understand the people involved.
What made the networking assistant a useful test
A networking assistant worked because the audience already knew the problem.
We were not asking them to imagine a made-up business, a fictional customer journey or a theoretical app. We were asking them to improve something they were actively experiencing in that moment.
That meant the brief had real context behind it: how can you add a digital layer to the customer experience of a networking member?
The app could respond to genuine needs in the room, rather than a generic idea of what networking software should do. It could explore how people find useful contacts, remember conversations, understand who is in the room and get more value from the network after the event.
That is why the exercise was useful.
The app itself was not the whole point. The important bit was seeing how quickly a shared understanding could become something testable.
Within an hour, the gap between idea and prototype had gone. But the live build also showed that the technology was only part of the story.
The audience mattered. The questions meant something. The context was important.
The what: the tools we are (currently) using
The first chapter of the talk was the “what”.
What are the tools? What can they do? And what are people starting to do differently because of them?
I did not want to turn this into a deep technical tutorial, because there are already plenty of those. You can lose hours on YouTube learning the detail of Cursor, Claude, ChatGPT and the rest.
The point I wanted to make was simpler.
The tools are already good enough to change how work gets made.
Cursor, for example, is a development environment with AI built into the workflow. It looks and behaves like a coding tool, because it is one, but the important shift is that you can now work with code through dialogue. You can describe what you want, ask it to make changes, review the output, test it, and keep moving.
That does not remove the need for judgement. It probably increases it.
But it does mean the distance between idea and working version is getting shorter.
Why better prompts are not enough
The biggest change we have seen is not just that people are writing better prompts.
It is that people are beginning to understand the process around the prompt. Take a blog article as a simple example.
If you ask an AI tool to write a blog article, you will probably get something that looks like a blog article. It may have headings, paragraphs and a reasonably confident tone. But it will often feel thin, because the tool is having to guess too much.
It does not know where the article sits in the marketing funnel.
It does not know the keyword intent.
It does not know the narrative arc.
It does not know the audience, the objections, the examples, the call to action, or the commercial purpose behind the piece.
So the better question is not “can AI write me a blog?”
The better question is “what process would I normally follow to create a useful article, and how can I guide the tool through that process?”
And of course this applies to every process, not just the art of content creation.
Humans are learning how to work with these tools: instead of asking for an output, we are starting to design a workflow for them to follow.
A process for creating quality outputs
That is why custom GPTs, Claude skills and agent-based workflows are becoming more interesting.
They are not magic. They are ways of giving the tool a repeatable process.
A Claude skill, for example, can hold a small instruction file inside a project. That file can explain how you do a particular task, what matters, what order things should happen in, and what standards need to be followed.
You can do something similar with a custom GPT. Rather than starting from scratch every time, you can build a process around a specific type of work.
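As a sketch, here is the kind of thing such an instruction file might contain. This is a hypothetical example for a blog-writing skill; the filename, headings and wording are illustrative, not a documented format.

```markdown
<!-- SKILL.md — hypothetical instruction file for a "blog article" skill -->
# Skill: Draft a blog article

## Process (in order)
1. Ask where the article sits in the marketing funnel.
2. Confirm the keyword intent and the target audience.
3. Agree a narrative arc before writing any prose.
4. Draft with headings, concrete examples and one clear call to action.
5. Review the draft against the standards below before returning it.

## Standards
- UK English, plain language, no jargon without explanation.
- Every claim needs context: who it is for and why it matters.
- End with one specific call to action, not three vague ones.
```

The point is not the exact file format. It is that the process you would normally hold in your head now lives somewhere the tool can follow it, every time.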
Another tip that speaks to this is asking an LLM to facilitate a conversation.

If you are developing an application, you might ask it to bring several perspectives into the room: a front-end developer, a back-end developer, a designer, a user experience tester, a customer and a business administrator.
That creates a richer conversation and a better output than one flat instruction, with far fewer hallucinations.
AI tools are most useful when we stop treating them like vending machines for answers and start treating them like collaborators inside a process.
Context is becoming a practical advantage
Voice tools make this even more obvious.
We have been using Whispr Flow because it lets you press a button, speak naturally, and drop the cleaned-up text into almost any box you can type into.
That might sound like a small thing, but it changes the amount of context you can give.
Most people say more than they type. They explain the background, the nuance, the reason something matters, the thing they are worried about, and the edge case they forgot to mention in the first sentence.
Pretty Prompt is useful for a similar reason. It helps show the difference between asking for something vague and giving the tool enough context to understand the audience, the problem and the intended outcome.

“Make me an app” is not much of a brief.
“Build a {type of app} {for this audience}, to help them {solve this problem}, {in this context}” gives the tool far more to work with.
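To make the difference concrete, here is a small sketch of that template filled in. The audience, problem and context below are hypothetical example values, not the actual brief from the talk.

```python
# A hypothetical helper that fills in the brief template from the article.
def build_brief(app_type: str, audience: str, problem: str, context: str) -> str:
    """Turn the four template slots into a single, context-rich brief."""
    return (
        f"Build a {app_type} for {audience}, "
        f"to help them {problem}, {context}."
    )

# Illustrative values only — swap in your own audience and problem.
brief = build_brief(
    app_type="networking assistant",
    audience="members of a local business network",
    problem="remember conversations and follow up after events",
    context="in the context of a monthly in-person meetup",
)
print(brief)
```

The template itself is doing the work here: each slot forces you to state a piece of context the tool would otherwise have to guess.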
So the real message of the first chapter is this: the tools are good, but the way we learn to frame the work they do matters more.
At this point in the live session, I checked in with Richard. He was still working through the possible functionality in Cursor, turning the audience’s responses into something the app could use.

The how: AI vs AGI, and the wider landscape
The second chapter was the “how”. Not how to use the tools, but how to understand what is happening with AI.
Because when people talk about AI, they often bundle a lot of different ideas together. ChatGPT, image generation, automation, coding assistants, artificial intelligence, artificial general intelligence. It all gets pulled into the same conversation.
AI is pattern recognition
The simplest way I explained it in the room was this:
AI is pattern recognition.
A large language model is recognising patterns in language. It takes what we say, looks at the relationships between words, ideas and examples in its training, and produces an answer that fits the pattern of the request.
So when we talk to it, we are shaping a conversation with ourselves.
If we ask for something broad and thin, the tool has to fill in the gaps itself. If we give it a richer conversation, it has more to work from.
But it is still working inside the box.
That is the important bit.
AGI is cross-context thinking
Artificial general intelligence is a different idea.
AGI is not just pattern recognition inside a defined task. It is the idea of intelligence that can think across contexts. It can understand what is around the box, not just what is inside it.
That starts to sound much closer to a human mind.
A system that can reason, adapt, learn, pursue goals and apply understanding across different domains would be very different from the tools most of us are using now.
That is why the AGI conversation is so much bigger than productivity.
Why this raises bigger questions
In his book Our Final Invention, James Barrat explores the idea of a machine intelligence that becomes self-aware.
He argues that anything self-aware seeks improvement. The uncomfortable part is not that AI suddenly becomes cartoonishly evil and there is an apocalyptic end.
We might not be the target. We might not be the enemy. We might simply not fit the goal and be something in the way of it.
If something can improve itself, and if it can keep pursuing a goal without sleeping, ageing or dying, then the scale of its intelligence could move beyond us very quickly.
That is a different kind of risk: entirely avoidable, but increasingly likely, because we are not building the safety rails.
The real risk is a race without a shared map
Humanity is not embracing this new technology as one careful, united group.
We are fractured.
Countries are fighting other countries. Businesses are racing other businesses. Defence, big data, energy, money and market advantage are all tied up in the same direction of travel.
Everyone wants to implement AI faster than someone else.
Even if many people involved in AI are thoughtful, careful and well-intentioned, the wider system is not calm. It is competitive. It is uneven. It is full of different incentives.
That is why AGI is not just a technical question. It is a human one.
During the seminar, this was the point where we checked back in with Richard. By then, there was something visible on the screen. A side navigation had started to appear, with pieces of functionality taking shape. Not everything was wired up yet, and not all of it worked, but the app was becoming real.
“Success in creating effective AI could be the biggest event in the history of our civilisation. But it could also be the last, unless we learn how to avoid the risks.”
– Prof. Stephen Hawking
The where: attention is the real advantage
The third chapter was the “where”. Where do we point this capability? Or in other words, where do you put your attention?
That felt like an important question to ask after talking about AI tools and the bigger idea of AGI. Because once you start to see what these systems can do, the next question should not be “what can I automate?”
It should be “what is worth solving?”
That was the reason I talked about Demis Hassabis and DeepMind.
DeepMind was not built around the idea of making another small productivity app. Hassabis had been deeply interested in AI from a young age, and the mission behind DeepMind was to build AI responsibly and use it to benefit humanity.
That is a very different starting point.
What DeepMind teaches us about goals
In the documentary The Thinking Game, DeepMind uses games to develop AI. The reason is that games have clear parameters and goals.
They started with Pong, and gave it a simple goal: get points.
The system was not told how to use the bat. It had to work that out. At first, it was terrible. Then, after hundreds of attempts, it started to improve. Eventually, it became extremely difficult to beat.
That sounds playful, but it teaches something serious.
If you give a system a goal, feedback and enough attempts, it can discover routes that were not directly programmed into it.
The human does not have to explain every move; the system learns by trying.
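That loop can be sketched in a few lines. This is not DeepMind's system, just a minimal epsilon-greedy bandit under the same principle: the agent is told nothing about which option is best, only given a goal (reward), feedback, and many attempts.

```python
import random

def train_bandit(payouts, steps=5000, epsilon=0.1, seed=42):
    """Learn which arm pays best purely from reward feedback.

    payouts: win probability of each arm; the agent never sees these,
    it only sees the rewards its own choices produce.
    """
    rng = random.Random(seed)
    counts = [0] * len(payouts)
    values = [0.0] * len(payouts)  # running average reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(payouts))  # explore: try something random
        else:
            arm = values.index(max(values))    # exploit: best-known arm so far
        reward = 1.0 if rng.random() < payouts[arm] else 0.0
        counts[arm] += 1
        # incremental update of the running average for this arm
        values[arm] += (reward - values[arm]) / counts[arm]
    return values.index(max(values))

best = train_bandit([0.2, 0.5, 0.8])
print(best)  # converges on index 2, the highest-payout arm
```

Early attempts are as bad as the Pong example: the agent picks arms almost at random. After enough tries, the feedback alone steers it to the best option, with no one explaining the rules.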
From games to real-world problems
The Thinking Game documentary follows this journey from games into much harder problems.
AlphaGo showed what could happen when AI was applied to a game with enormous complexity. It did not just play Go well. It made moves that surprised expert players, because it was exploring patterns and possibilities differently.
Then came AlphaFold.
Protein folding is not a neat business productivity problem. It is a scientific challenge that researchers had been trying to solve for decades. It was slow, complex and difficult to predict accurately.
DeepMind’s early results were impressive, but not perfect. Then the progress jumped. AlphaFold reached a level of accuracy that changed the field and contributed to the work that led to a Nobel Prize in Chemistry.
The real opportunity is not that AI can help us make more things more quickly. It is that these systems may help us solve problems we could not solve in the old way.
Don’t just build another app
That was the bridge back to the live build.
Yes, we were building a networking assistant in the room. And yes, it was exciting to see an idea move into something working so quickly.
But if all we take from that is “we can make apps faster now”, we miss the bigger point.
The interesting question is what becomes possible when people with real knowledge of a sector, a community or a problem can start turning that knowledge into working tools.
You do not need to know every technical step to see what is possible anymore.
That changes things for business owners, marketers, creatives and subject experts. People who might never have seen themselves as software builders can now start shaping digital products, workflows and services through dialogue.
That does not mean expertise disappears.
It means different expertise becomes more important.
The person who understands the problem, the user, the context and the opportunity suddenly has much more power in the development process.
At this point in the talk, Richard had the app mostly built. He was able to demo functionality we had not planned in advance, because it was based entirely on the room's brief. We were around 45 minutes in, and we were starting to think about deploying it.
The why: Why should anyone care about your idea?
The fourth chapter was the “why”.
This was the turning point in the talk, because it moved the conversation away from the technology itself and back towards the people in the room.
If these tools keep improving, then the ability to build something will become less rare.
That does not mean technical skill disappears, but it does mean the value starts to move.
If anyone can build an app, a dashboard, a workflow, a content tool, a customer portal or a rough version of a product, then the harder question is no longer “can this be built?”
The harder question is:
Why should anyone care about what you build?
Jumps in tech
If we suspend disbelief for a moment and imagine that anyone in the room could now build something like Xero, where is the value?
It is not just in the existence of accounting software.
Xero replaced spreadsheets for many businesses. Before that, spreadsheets replaced calculators. Each leap changed the tool, but people did not stay excited about the tool forever.
We are not excited by washing machines because they replaced doing everything by hand. We value the job they do, then we move on.
If we only use AI to make another version of something that already exists, the novelty will fade quickly, or someone with a bigger audience will make it and flood the market.
The classic version of a unique selling proposition asks what makes you different from someone else.

USP 2.0: user need, specialism and perspective
So I invented a new version, as a useful lens for exploring ideas with AI.
It is the overlap of three things:
User need.
Specialism.
Perspective.
- User need is what real people need, struggle with, avoid, misunderstand or wish was easier.
- Specialism is what you have built over time. Your sector knowledge, experience, data, craft, relationships and understanding of how things really work.
- Perspective is your purpose, values and vision for change. It is the way you see the world. It is what you care about enough to do differently.
I believe the strongest ideas sit in the middle of those three things, because if you build only from user need, someone else can probably copy it.
Your specialism and perspective give you a unique angle, and that makes the idea much harder to replicate.
Not because the technology is impossible to copy.
But because the thinking behind it is yours.
What makes an idea hard to copy
This is where the Exeter Business Network example helped.
A generic networking app is not that interesting. Plenty of people could build a tool to list attendees, store profiles, suggest connections or send follow-up prompts.
But that was not really the brief. The more interesting question was what made this particular network valuable.
Helen and Russell, who run the network, bring something specific to the room. They make people feel warm and welcome. They listen. They connect people thoughtfully. They are not trying to build the biggest room for the sake of it. They are building a values-based community.
The job is not to replace the human part of networking. It is to supplement it.
A useful app for that network should not feel cold, transactional or like another platform people have to manage. It should extend what already works. It should help people understand who is in the room, make better introductions, remember useful conversations and feel more confident about following up afterwards.
The app becomes more valuable when it carries the perspective of the people and community behind it.
So the question I asked the room was:
“What do you see or understand that others in the room might not?”
That is the real opportunity. Not just “what can AI help us build?”
At that point, we checked in with Richard again. He gave us a demo of the admin area, and we deployed the app to a URL that the room could access through a QR code and login.
The app was not the point
The room had given us the brief. Richard had worked through the functionality. The app had taken shape on screen, and we were ready to deploy it so people could actually use it.
That was a satisfying moment, because live demos are always a little bit risky and the errors along the way were mostly typos on my slides!
A polished demo would have been easier. We could have built something in advance, hidden the messy middle, and presented the result as if it happened smoothly.
The point was that a room full of people could help shape an idea, watch it become something testable, and see how quickly the gap between a problem and a prototype is starting to close.
What the live build showed us
The live build showed that the way we make digital things is changing.
If it is easy, quick and inexpensive to supplement your customer experience with a better digital experience, then you had best believe your competitors are thinking of new ways to do this right now.
Or if you are bogged down in admin, then joining up or replacing software systems has never been an easier way to get time back and save money.
Imagine how lean and innovative a startup could look in the hands of someone just out of university armed with AI, in any industry.
That is as much a threat as an opportunity, because most organisations have great ideas sitting around that never become anything.
They stay in meeting notes, wish lists, internal conversations or “one day” plans because the jump from idea to build feels too big.
What the live session showed is that this jump is getting smaller.
Why the cost of experimentation has changed
When prototyping is expensive, slow or technically difficult, organisations naturally become cautious. They spend longer trying to predict the right answer before they test anything.
That can make sense, but it can also trap good ideas before they have had a chance to breathe.
If AI-assisted tools make it easier to create a rough working version, then organisations can learn earlier. They can see what users respond to. They can find the weak points. They can test whether the idea has any life in it before turning it into a larger project.
What this means for your organisation
For most organisations, the practical question being asked is “how do we use AI?”
That question is too broad. It usually leads to a messy list of tools, experiments and half-formed ideas. Some useful, some distracting, some just there because everyone feels they should be doing something to keep up.
A better question might be: what do we understand that could become more useful if we turned it into a tool, process or digital experience?
That is a much more interesting place to start.
Because most organisations are sitting on valuable knowledge already. It might be in the heads of experienced staff. It might be in customer conversations, internal processes, service delivery, sales calls, support tickets, research, spreadsheets, workshop notes or years of sector experience.
AI does not automatically make that knowledge useful.
But it can help you shape it into something more practical, more accessible and easier to test.
That might be an internal assistant that helps a team follow a better process. It might be a prototype for a new customer tool. It might be a way to help people make better decisions. It might be a digital layer that extends a service you already deliver well.
The opportunity is not to bolt AI onto everything.
The opportunity is to look carefully at where your knowledge, your customers’ needs and your perspective overlap.
Start with why as a north star
In a period of rapid change, your “why” becomes more important, not less.
Tools will change. Interfaces will change. The way we search, write, build, design and make decisions will keep shifting. Some of what feels impressive today will feel ordinary very quickly.
That is why purpose matters. Not as a soft brand statement. As a practical anchor.
When everything is moving quickly, your why helps you decide what not to build, where not to spend your time, and which opportunities are worth following. It gives you a way to judge the work beyond speed, novelty or efficiency.
This connects closely to how we think about sustainable digital transformation. It is not about tech perfection. It is about making better digital decisions, one practical step at a time.
Our toolkit frames this around aligning your digital presence with your values, including strategic purpose, responsible leadership, measured impact and narrative integrity.
That matters with AI because the temptation will be to chase capability. Can we automate this? Can we build this? Can we move faster?
Once you have realised the answer is yes, the better question is:
Should we?
Your why gives you somewhere to come back to that will always be important to you. It helps you build things that support your mission, your customers, your team and the change you actually want to make.
Because in a world where more things become possible, where you place your attention becomes the advantage.
Build small, learn quickly, then decide
The next step is not to build the perfect thing.
It is to build the smallest useful version that can teach you something.
That is a healthier way to approach this kind of work. It takes some of the pressure off. You are not trying to predict the entire future of a product before anyone has used it. You are trying to learn whether the idea has value.
A prototype gives you something to react to. It turns a discussion into evidence. It helps you see where the idea is strong, where it is weak, and whether it deserves more investment.
Ready to explore what you could build next?
If you have an idea, a recurring process, or a customer journey that isn't getting traction, our digital innovation service can help you turn it into something testable.
With our Rapid Prototyping service we’ll help you shape the brief, find the real user need, prototype the right solution, and decide what is worth building properly.
You can even learn how to use the tools along the way as a training exercise for you or your team.
Want a chat about AI prototyping?
You probably have some questions. If you want a no-obligation, quick 30-minute chat about how this works, book a call with us.
Reuse this work
All our blog articles are shared under a Creative Commons Attribution licence. That means you’re free to copy, adapt, and share our words as long as you credit Vu Digital as the original author and link back to the source.
External Links
Our articles and data visualisations often draw on the work of many people and organisations, and may include links to external sources. If you’re citing this article, please also credit the original data sources where mentioned.