There’s no doubt the adoption and high-speed evolution of AI technology are impacting the tourism industry. Should we blindly embrace all of it, warts and all? How should we cautiously weigh the risks and benefits?
Here’s how we set the guardrails for our team at WOOF Media by way of our inaugural AI Policy. We’re sharing it here for you to consider and adapt to your organisation’s needs.
Earlier this year, artificial intelligence started to become one of those hot topics we couldn’t ignore. One of our clients was approached by a software vendor to install an AI service on their website. This service would profile every visitor and provide our client (and the vendor…) with their contact information, without prior consent. Holy Australian Privacy Act!
Meanwhile, we were debating internally where and when, or even if, it would be appropriate to use generative AI at all. If we generate an image for a social media post, are we diminishing the roles of our graphic designers? Was the training data for the AI tools obtained ethically? It would be madness for a creative team like ours not to have its intellectual property house in order. Would our clients expect a human to do their design work?
With input from everyone, we established version one of the WOOF Media Artificial Intelligence Policy. If it’s useful to your team, you’re welcome to borrow what you wish (unless you’re training an algorithm!). And, as a client, if there’s anything else you would expect to be addressed, please do let us know!
WOOF Media’s AI Policy
Change is constant. It is our duty to stay abreast of technological advances that could add value for WOOF Media and our clients. We recognise that AI is always evolving and will change how we work over the coming years. However, we are not naive about the risks and dangers of AI and intend to make thoughtful choices about its use.
We acknowledge that ‘AI’ is a broad term that applies to many types of technology, software, and uses. We will validate our decisions through a framework of principles and regularly review and update our approach and policy as the legal and technological landscape evolves.
Guiding Principles
These guiding principles inform how we approach the use of AI both internally and for client services:
- We respect and uphold the IP rights of artists, writers, and other knowledge workers in our use of AI. For example:
- We won’t use generative AI to create copy in the style of a published writer’s voice;
- We won’t use generative AI to create images using models that have been trained on unknown or unacknowledged creative works;
- We may use generative AI to create images where the source images are licensed and acknowledged creative works.
- We may use generative AI to enhance the work of our own internal roles but not replace someone else’s in-house skills. For example:
- If a design or illustration is required, we would not use AI to generate those images for public/client use without consulting our design subject matter expert.
- Conversely, our design subject matter expert may use AI to augment their skills – for example to create an internal moodboard for inspiration for a design or illustration.
- If copy is required, we would not use AI to create copy for public/client use without first consulting our internal copywriting subject matter expert.
- Conversely, our copywriting subject matter expert may use AI to generate copy for editing to suit public/client use while being fully responsible for fact-checking, tone of voice, etc.
- We may use existing AI-supported tools that provide automation and efficiencies to existing roles. For example (but not limited to):
- Xero uses predictive modelling to provide automated prompts and categorisation of financial data;
- Conferencing software that provides automated transcription and documented action items to reduce administration time for internal and client meetings (while ensuring such tools align with our guiding principles on client data, privacy, and consent below).
- We do not use our clients’ IP (e.g. client data, images, or video) in Large Language Models or predictive modelling without prior extensive consultation and consent from the client and any applicable vendors.
- We respect Australia’s privacy laws and the (developing) Australian AI Ethics Framework when we are using, or considering the use of, AI tools.
This article was first published on woofmedia.com.au.