
AI Policy

This policy outlines acceptable uses of AI within our company. It is designed to protect our reputation, assets, and intellectual property, while also enabling employees to explore AI safely and responsibly.

Ethical Use Guidelines

Transparency

We use AI to assist in some content development at our company. To ensure transparency, accountability, quality and privacy, we adhere to internal AI usage standards. These standards help us safeguard against biases, maintain data security, and uphold our commitment to ethical marketing practices. One of these standards is that AI should be used to assist in content creation, not fully automate it. We ensure that every piece of content we develop is shaped and reviewed by people who have an understanding of our audience and AI’s limitations.

Responsibility

We believe human oversight is crucial when leveraging the power of artificial intelligence. AI is a tool to augment our abilities, not replace human creativity and judgement. Therefore, it is our policy that any content generated by AI must be reviewed, edited and approved by a human before publication or use. There will always be a "human in the loop" to ensure quality control and align the final output with our brand voice and standards. All employees using AI tools will receive training on how to thoughtfully validate and refine AI-generated content.

Avoiding Bias

AI systems reflect the data they are trained on. We understand that, unfortunately, societal biases can be unintentionally propagated through algorithms. At Brightspark, we are committed to ensuring our AI-generated content is inclusive, ethical and accessible for all audiences. That is why we take the following proactive steps to detect and mitigate bias risks:

  • When we use ChatGPT, we set our custom instructions to require diverse perspectives in its answers.
  • All AI output is reviewed to identify any exclusion of or unfairness toward groups based on characteristics like age, gender, ethnicity, sexual orientation, or disability status.
  • Any detected biases are documented and the AI outputs refined appropriately before publication.

The bottom line: we welcome AI assisting us creatively, but we will not accept the perpetuation of harm through bias.

Privacy

Protecting confidential information is of the utmost importance to our company. We have strict data privacy protocols in place that apply to all aspects of our business - including AI usage.

Our AI policies and practices are designed to ensure:

  • Sensitive client, employee or company data is never used in AI systems without explicit permission.
  • AI tools undergo rigorous evaluation by our security team before approval for use. Only authorised tools that comply with GDPR may be used for business purposes.
  • Client contracts may include specific clauses outlining our AI data practices and privacy commitments.
  • Any actual or potential breaches stemming from AI usage are escalated immediately to the IT team.

By integrating strong privacy provisions into our AI practices, we aim to harness these technologies while still upholding our duties around confidential data. 

Security

Our AI security practices include:

  • Rigorous vetting of any AI applications by our IT team before approval for work devices. Only authorised tools may be used. 
  • Ongoing cybersecurity training for all staff using AI - including how to spot phishing, social engineering, and other attacks targeting these systems.
  • Working closely with AI vendors to understand their evolving security protocols for protecting against threats like data poisoning, model extraction, and adversarial examples.
  • Conducting in-house audits and risk assessments of AI systems to catch any vulnerabilities and quickly patch them.
  • Monitoring emerging cyber risks associated with artificial intelligence and adjusting our infosec procedures accordingly.
  • Having an incident response plan in place in the unlikely event an AI-related breach occurs.

Ethical AI Usage

As an ethical company, we believe in using technology responsibly and deliberately for the benefit of people. This applies to our usage of artificial intelligence tools. In addition to the principles of transparency, accountability, privacy and security outlined above, we also pledge to:

  • Use AI to better serve customers and amplify human capabilities - not solely to cut costs or jobs.
  • Understand the limitations of AI.
  • Never use AI to mislead or manipulate customers.
  • Never use AI to impersonate anyone without their explicit permission.
  • Abide by relevant laws, regulations and industry practices governing ethical AI design and use.

Responsible Use Procedures

To practically implement this policy, we will always follow these steps:

1. Always adhere to the approved set of AI tools (see below).

2. Understand the tool being used, how it works and its potential limitations.

3. Ensure that every new hire and existing employee reads and understands this policy.

4. Commit to updating knowledge and training at the same pace that AI technology evolves.

Tool Selection

The following tools pass our scrutiny for security, privacy and GDPR compliance. These are the only tools to be used for company-related tasks as of November 2023.
This list will be reviewed again in December 2023.

  • ChatGPT
  • Bing
  • Claude
  • Midjourney
  • Jasper
  • Merlin 
  • AskYourPDF
  • Synthesia/ElevenLabs/HeyGen

The goal of this policy is not to restrict creativity, but to ensure that we use AI responsibly and ethically. Please contact us if you see anything that is not in line with this policy.