We have developed a clear framework for our approach to artificial intelligence (AI), ensuring we harness its power responsibly, ethically, and effectively. In short, proceed with caution.
Our team members are empowered to learn about AI, experiment with its use, and define its limitations within clear guardrails. We do this to aid their professional development and to evolve the services we provide.
We see it as a tool to be used by an expert to act as a springboard to solve creative and strategic problems. Like all tools, it is only as useful as the person using it.
We’ve focused on two main areas so far – machine learning and generative AI – but we do so with caution as we explore the ethics of AI content ownership, and we always assess the quality of AI-generated text and imagery.
We have a structured, transparent approach to experimentation with an audit trail, so we learn quickly – both individually and collectively – and can take advantage of new developments. We always share where, when and how we use AI, both with each other and with our clients.
Calling all Velo clients: Want to know more?
We believe in sharing what we have learned (good and bad) and will happily present to your team.
Just contact your Client Partner to organise it.
Our AI policy includes:
our goals & principles
our safety barriers
Our policy is continuously evaluated and adapted to keep pace with evolving AI technologies and ethical standards. One thing will always stay the same – our commitment to the responsible and innovative use of artificial intelligence in our work. You can read more about our experiences to date in Velo Voice.
If you’d like to talk about anything related to AI, or how we can help you with your efforts, please don’t hesitate to reach out to our team.
Matt Scutt, Executive Creative Director
On behalf of the Velo Senior Leadership team