Introduction
We recognise that technology is a natural progression – from the stone tablet to the pen, from the printing press to the typewriter, and from the personal computer to the mobile device.
Our role has always been about connecting people and teams with the digital tools they need to achieve their goals.
As a B Corp‑certified digital marketing agency, our policy reflects our core values by ensuring that AI is deployed in ways that prioritise people, creativity, and the planet.
We recognise that while AI is a powerful tool, it must be employed thoughtfully and transparently to prevent harm and maximise its positive impact.
Ethical AI Usage
At Vu, AI is embraced as a creative partner rather than a substitute for human ingenuity. We utilise AI to generate initial ideas and drafts that serve as a springboard for our brainstorming sessions, while all final decisions and creative refinements are made by our experienced team.
To ensure our AI applications do not perpetuate bias, compromise privacy, or erode trust, we rigorously review and select our tools. Regular audits are conducted to identify and eliminate potential biases, and we partner exclusively with vendors who adhere to recognised ethical guidelines and maintain transparent data practices.
Robust data protection measures – such as strong encryption and routine security assessments – are implemented to safeguard our data, and our AI-driven projects are continuously monitored to promptly address any issues.
We are equally committed to transparency with our clients. All project proposals and communications clearly disclose when and how AI tools are used, explaining their roles and benefits. Clients are provided with straightforward opt‑in and opt‑out options for generative AI tools, ensuring they have full control over their involvement.
We maintain an internal register of AI tools and their purposes, including a clear explanation of their limitations and the role of human oversight. Where relevant, stakeholders are informed about how decisions are made and given the opportunity to request a human review.
Environmental Impact
We deploy AI only when its benefits clearly justify its environmental cost. We regularly review and monitor the carbon footprint of our AI applications, and environmental impact assessments are built into our decision-making, ensuring that each tool’s benefits substantially outweigh its carbon cost.
Furthermore, we evaluate and partner with suppliers based on their adherence to our environmental standards, prioritising providers with B Corp certification, ISO 14001 certification, or carbon‑neutral practices, ensuring that our choices reflect our commitment to sustainability.
Client Rights
Clients have the freedom to decide whether AI tools are used in their projects.
Every proposal includes a clear explanation of AI use, opt‑in and opt‑out options at onboarding, and a detailed in‑person explanation of the benefits and limitations of AI. This approach enables clients to choose an AI-assisted solution or a fully human-led process based on their preferences.
We always prioritise our clients’ best interests.
Through thorough consultations, we gain an in‑depth understanding of each client’s unique objectives and challenges, enabling us to develop bespoke digital strategies and recommendations that align with their goals. Regular feedback sessions ensure that our advice evolves in step with their needs.
Our use of AI aligns with Vu Digital’s Privacy Policy and all applicable data protection regulations. AI tools are only deployed where necessary, with strict safeguards in place to minimise data collection, anonymise information where possible, and ensure transparency in AI-driven data processing. Clients are always informed when AI interacts with personal data and have the ability to opt out of AI-driven processing where applicable.
Social Responsibility
Our AI practices are firmly aligned with our commitment to creating a greener web and promoting responsible business practices. We integrate sustainability criteria into our AI tool selection process by prioritising energy‑efficient solutions and platforms that utilise renewable energy.
Regular environmental impact assessments ensure that our AI initiatives contribute positively to our sustainability goals. Additionally, we actively reduce our carbon footprint by minimising travel for offshore delivery and encouraging remote collaboration.
We also share our AI journey with the community. This involves organising webinars and workshops on ethical AI practices, regularly publishing blog posts, case studies, and white papers that detail our projects and lessons learned, and developing engaging educational content – such as video tutorials, podcasts, and training modules – to help others use AI responsibly.
Internal Governance and Oversight
We invest in our people by scheduling regular interactive training sessions and workshops that focus on ethical AI practices and emerging regulations. Industry experts are invited to host webinars and in‑person sessions, and we maintain an internal discussion forum for sharing updates, news, and experiences. Periodic assessments and feedback mechanisms are integrated into our training programmes to ensure continuous improvement.
Our AI practices are reviewed annually. Updated versions of our AI policy are published on our website and distributed via our newsletter to keep clients and partners informed. A robust feedback system gathers insights from employees, clients, and partners, ensuring that our policy evolves in line with emerging needs.
AI Lifecycle Management
Every AI tool we use undergoes a defined lifecycle process: from ethical approval and pilot testing through to periodic re-evaluation and eventual decommissioning. We track changes in performance, accuracy, and alignment with our values and environmental standards. Monitoring also includes user feedback loops and reassessment of use cases to ensure continued appropriateness and proportionality.
Incident Response Plan for AI Systems
We recognise that AI can occasionally produce unexpected results or unintended consequences. Whether due to bias, inaccuracies, or security concerns, swift action is essential to minimise risk and maintain trust. Our incident response plan ensures AI-related issues are identified, assessed, and resolved efficiently.
We have clear reporting mechanisms for employees, clients, and stakeholders to flag concerns. Once reported, our team conducts a rapid assessment to determine the root cause and takes immediate corrective action where necessary, including disabling affected functions. AI decisions with potential negative consequences are escalated to a human reviewer, ensuring AI remains a supportive tool, not an unchecked decision-maker.
Where required, we communicate openly with affected clients or stakeholders, outlining the issue and resolution. By maintaining transparency and accountability, we ensure AI-driven processes remain ethical, responsible, and aligned with our values.
Redress and Contestability
In the event that a client or stakeholder disputes an AI-influenced outcome, a formal process is in place for them to request a human-led review. We are committed to ensuring that any decision significantly impacting rights, reputation, or resources is contestable and overseen by a qualified member of the Vu team.
Stakeholders will receive a clear explanation of the AI tool’s role in the outcome, the logic behind the decision, and the corrective actions available. This ensures our commitment to human-centric oversight and protects individuals against unchecked automation.
Stakeholder Feedback
As a valued stakeholder, your insight helps us ensure our use of AI remains transparent, responsible, and aligned with our values.
If you have concerns or suggestions, or would like to share your experience with our AI tools, you can submit feedback at any time using the following link.
All feedback is reviewed by our internal team and may inform future updates to our AI practices and policy.
Example AI Tools
To ensure transparency and accountability, we maintain an internal AI Tools Register. This register documents all AI systems we use — including their purpose, capabilities, limitations, and how they’re monitored.
It helps us track ethical risks, environmental impact, and ensure each tool is reviewed regularly. The register supports our commitment to responsible AI use and provides a clear record of how and why each tool is in place.
Example entry:
ChatGPT
Date added: 31/3/25
Use case: content creation, gap analysis, research.
License: Commercial License
Stakeholders: Internal & External
If you are an existing client and would like to see the full register, just get in touch.
Contact Us
Got a question about this policy?
If anything needs clarifying, feel free to drop us a line and we’ll get back to you as soon as we can.
