policy-ai-content-ethics
AI Content Ethics Policy
Use when
- Produces a written AI Content Ethics Policy for a client — a one-to-two page compliance document stating how the agency and/or client uses AI tools in content creation, what is disclosed to audiences, what is prohibited, and what quality standards apply. Grounds the policy in five core ethical principles (transparency, fairness, nonmaleficence, accountability, privacy) and addresses emerging risks including data leakage, virtual influencer disclosure, filter bubbles, deepfakes, copyright uncertainty, and jailbreak attempts. Also provides the consultant with an internal ethics checklist and sector-specific guidance. Invoke when onboarding a new client, when a client asks how AI is used in their content, when operating in a regulated sector (health, finance, public sector, NGO/donor), or when preparing a credentials or proposal document that references AI-assisted production.
- Use this skill when it is the closest match to the requested deliverable or workflow.
Do not use when
- Do not use this skill for graphic design, video production, software development, or legal advice beyond the repository's stated scope.
- Do not use it when another skill in this repository is clearly more specific to the requested deliverable.
Workflow
- Collect the required inputs or source material before drafting, unless this skill explicitly generates the intake itself.
- Follow the section order and decision rules in this SKILL.md; do not skip mandatory steps or required fields.
- Review the draft against the quality criteria, then deliver the final output in markdown unless the skill specifies another format.
Anti-Patterns
- Do not invent client facts, performance data, budgets, or approvals that were not provided or clearly inferred from evidence.
- Do not skip required inputs, mandatory sections, or quality checks just to make the output shorter.
- Do not drift into out-of-scope work such as code implementation, design production, or unsupported legal conclusions.
Outputs
- A structured markdown document, plan, playbook, or strategy ready for client-facing or internal use.
References
- Use the inline instructions in this skill now. If a references/ directory is added later, treat its files as the deeper source material and keep this SKILL.md execution-focused.
Required Inputs
Ask for all of the following before generating any output:
- Client business name and industry — the legal or trading name of the business and the sector it operates in (e.g., healthcare clinic, SACCO, NGO, retail brand, government agency).
- Country and city — defaults to Uganda/Kampala if not specified.
- Primary goal — what the client wants to achieve by having this policy (e.g., satisfy a donor requirement, protect brand reputation, formalise internal practice, respond to an audience enquiry).
- Client's AI awareness — is the client aware that AI-assisted tools are used in their content production? (Yes / No / Partial — some clients delegate fully to the agency and do not review the workflow.)
- Audience type — select the closest match: B2B professionals, general consumers, public sector, or NGO/donor audience. This affects disclosure language.
- Existing brand guidelines or ethics commitments — does the client have a brand manual, a code of conduct, a donor compliance framework, or any prior ethics policy in place? If yes, note key constraints.
- Publishing voice — does the client publish content under their own name and identity, or under a persona, brand voice, or anonymous channel?
- Regulatory environment — which of the following apply? (Select all that apply.)
- Uganda Data Protection and Privacy Act 2019 (UDPPA)
- Kenya Office of the Data Protection Commissioner (ODPC)
- International donor requirements (USAID, EU, UN agencies, etc.)
- Regulated sector: financial services (CMA/BoU oversight)
- Regulated sector: health (Ministry of Health guidelines)
- Regulated sector: political content (NITA-U / Electoral Commission)
- None of the above / general commercial use
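For teams that record these inputs in a structured intake form, the "ask for all of the following before generating any output" rule can be enforced with a small completeness check. This is a sketch under stated assumptions: the field names below are illustrative, not mandated by this skill.

```python
# Minimal intake-completeness check for the Required Inputs stage.
# Field names are illustrative; adapt them to your own intake form.
REQUIRED_FIELDS = [
    "client_name", "industry", "country", "primary_goal",
    "ai_awareness", "audience_type", "brand_guidelines",
    "publishing_voice", "regulatory_environment",
]

def missing_inputs(intake: dict) -> list[str]:
    """Return the required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not intake.get(f)]

intake = {
    "client_name": "Example SACCO Ltd",        # hypothetical client
    "industry": "financial services",
    "country": "Uganda",                       # defaults to Uganda/Kampala
    "primary_goal": "satisfy a donor requirement",
    "ai_awareness": "Partial",
    "audience_type": "NGO/donor audience",
    "brand_guidelines": "donor compliance framework on file",
    "publishing_voice": "own name and identity",
    "regulatory_environment": ["UDPPA", "International donor requirements"],
}
print(missing_inputs(intake))  # an empty list means drafting can begin
```

Only when `missing_inputs` returns an empty list should the policy draft be generated; anything still listed goes back to the client as a question.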
Section 1 — Why an AI Ethics Policy Matters
Generate three paragraphs using the following framing. Adapt language to the client's sector and audience type.
Paragraph 1 — Production risk. AI tools accelerate content production and reduce drafting costs, but they introduce specific risks: factual errors presented with false confidence, brand voice drift away from the client's authentic register, unintentional reproduction of copyrighted material, and outputs that reflect biases present in training data. Without a written policy, these risks are managed informally — meaning inconsistently.
Paragraph 2 — Audience trust in the EA context. In East Africa, professional and institutional audiences are increasingly sophisticated in detecting generic AI output. Undisclosed AI-generated content in health, finance, public sector, and NGO contexts creates institutional trust risk. Audiences who feel deceived — particularly B2B buyers, government partners, and international donors — do not simply disengage; they raise formal concerns. A policy signals that the organisation takes authorship, accuracy, and accountability seriously.
Paragraph 3 — Protection for all parties. A written AI Content Ethics Policy protects the client (by setting clear standards for what the agency produces), the agency (by defining what it will and will not do), and the audience (by ensuring human review stands between AI output and publication). It is a professional baseline, not a constraint on production speed.
The Five Ethical Principles
Apply these five principles throughout the policy. Cite Ltifi (2025) and Johnsen (2024) on first use.
| Principle | Definition | Practical application |
|---|---|---|
| Transparency | Disclose AI use honestly to clients and audiences | State which tools are used; label substantially AI-generated content |
| Fairness | Monitor AI outputs for bias and discriminatory framing | Review outputs for stereotyping; audit targeting logic quarterly |
| Nonmaleficence | Do no harm — do not use AI to deceive, manipulate, or demean | Prohibit fake testimonials, deepfakes, and psychological targeting |
| Accountability | Humans remain responsible for AI output at all times | Named reviewer signs off every published piece |
| Privacy | Protect personal data from AI tools and cloud systems | No PII entered into any AI prompt under any circumstances |
Section 2 — AI Content Ethics Policy Template
Generate the following policy document. Replace all bracketed placeholders with information gathered in the Required Inputs section. Where a regulatory option was not selected, omit that clause rather than leaving a placeholder.
[CLIENT BUSINESS NAME] AI Content Policy
Effective date: [DD Month YYYY]
Reviewed by: [Name, Title]
1. Purpose
This policy governs how [Business Name] uses artificial intelligence (AI) tools in the creation, editing, and distribution of content across social media, email, blogs, and marketing materials. It sets out what AI tools are used, how human oversight is applied, what is disclosed to audiences, and what uses are prohibited.
2. Tools in Use
[Business Name] uses the following AI-assisted tools in content production:
- [Tool 1, e.g., Claude (Anthropic)] — for drafting captions, blog posts, and email copy
- [Tool 2, e.g., Canva AI] — for visual content ideation and design suggestions
- [Tool 3, e.g., ChatGPT (OpenAI)] — for research and content ideation
Update this list whenever a new AI tool is introduced to the workflow.
3. What AI Does and Does Not Do
AI tools draft and suggest content. A human team member reviews, edits, and approves every piece of content before publication. AI-generated content is never published without human review. Final editorial responsibility rests with [Name/Team at Business Name].
4. Accuracy and Fact-Checking
All factual claims in AI-assisted content are verified by a human team member before publication. Statistics, health information, financial data, legal statements, and claims about specific individuals or organisations are subject to additional verification from primary sources. AI outputs are treated as first drafts, not final authorities.
5. Brand Voice and Authenticity
AI tools are briefed against [Business Name]'s brand guidelines and tone of voice. All AI output is edited to reflect the authentic voice, values, and perspective of [Business Name] and its team. Generic or templated-sounding output is rewritten before publication.
6. Disclosure
[Business Name] does not routinely label individual posts as AI-assisted, as AI tools function as drafting aids in the same way a template or spell-checker does. Where content is substantially AI-generated with minimal human editing, it will be labelled accordingly. [Business Name] will not use AI to misrepresent human authorship in contexts where human authorship is material — including authored opinion pieces, personal testimonials, attributed quotes, and donor narrative reports.
For thought leadership, opinion pieces, personal brand content, and donor narrative reports, apply a 'Proof of Human' signal — a visible marker or statement that a named human wrote or substantially shaped the content. In an AI-saturated market, authentic human authorship is a brand asset (Schaefer, 2025).
Where a virtual or AI-generated persona is used to represent the brand (e.g., an AI-generated brand ambassador or synthetic spokesperson), this must be clearly disclosed in every post. Non-disclosure of AI identity in influencer contexts is an emerging regulatory risk (Ltifi, 2025; see the Lil Miquela precedent).
7. Prohibited Uses
[Business Name] will not use AI tools to:
- Generate false testimonials, fake reviews, or fabricated customer or beneficiary stories
- Create deepfake images, synthetic voice, or video of any named public figure, brand spokesperson, competitor, or customer without their explicit written consent — the reputational and legal consequences of unsanctioned impersonation are severe
- Produce content that misrepresents the identity of a human author in a material way
- Generate content in regulated sectors (health, finance, legal) without review by a qualified professional
- Automate engagement through the purchase of followers, fake likes, or bot-driven interactions
- Reproduce copyrighted material in a way that constitutes infringement
- Claim copyright in AI-generated content that has had minimal human input; ownership of AI-generated creative work is legally uncertain in most jurisdictions — obtain legal advice before registering or licensing such work
- Generate political statements, manifestos, or candidate-attributed content without disclosure and legal review
- Deploy AI-driven personalisation in ways that create filter bubbles — reinforcing existing beliefs and limiting audience exposure to diverse perspectives; audit targeting logic quarterly to ensure content reaches beyond existing believers
8. Data and Privacy
AI tools are used in compliance with [the Uganda Data Protection and Privacy Act 2019 / the Kenya Data Protection Act 2019 / applicable legislation]. Customer data, personally identifiable information (PII), and confidential client or beneficiary information are not entered into AI prompts. Explicit consent must be obtained before customer data is used to train or brief AI tools; this consent is separate from general data collection consent under the Uganda Data Protection and Privacy Act 2019. Team members are trained on this requirement as part of onboarding.
Do not enter confidential business information, trade secrets, or proprietary strategy documents into AI prompts that use cloud-based models. Cloud AI processes all inputs on remote servers — treat AI chat interfaces as public-facing environments. In 2023, Samsung engineers inadvertently leaked source code and meeting notes via ChatGPT (Venkatesan and Lecinski, 2026).
9. Compliance and Review
This policy is reviewed annually or whenever a significant AI tool is added to the content workflow. Any team member who identifies a breach of this policy must report it to [Name/Title] within 24 hours. Questions about this policy should be directed to [contact name / email address].
Signed: _________________________________ Date: ________________
[Name, Title] [Business Name]
Section 2A — AI Attribution and Disclosure Standard
Source: Ching & Mothi (2025). The disclosure standard used in this policy requires specificity. "Made with AI" is insufficient. The agency standard is:
"AI-generated [specific element], art-directed and revised by [human team]."
Professional precedent: the band YACHT documented their AI-assisted album in specific liner notes identifying exactly which elements were AI-generated and which were human-executed. This level of attribution is the standard the agency applies and recommends to clients. Where disclosure is provided, it must be specific enough that an informed reader understands what the AI contributed and what the human contributed.
Section 2B — Intellectual Property and Copyright
Source: Ching & Mothi (2025, p.82). Add as a named clause in client policies for any client intending to register or commercially licence their content:
What the policy must state:
- AI-generated content without substantial human creative contribution may not qualify for copyright protection under UK, US, or EU law
- This agency ensures that every deliverable involving AI assistance also involves substantial human creative contribution — in the form of strategic direction, editorial revision, cultural adaptation, and brand voice application
- Before registering or licensing any AI-assisted creative work, the client must obtain legal advice from a qualified intellectual property solicitor
Include this clause in the policy when the client is a creative agency, publisher, music producer, or any business that commercialises content through licensing or registration. For general brand content, note in the production record that human contribution is documented per deliverable.
Section 2C — SynthID and AI Content Watermarking
For AI-generated audio and visual assets, tag original AI-generated files with persistent metadata or watermarks before any editing or compression.
- Audio: SynthID (Google/DeepMind) is the current standard for AI-generated audio — it embeds a watermark that survives compression and editing
- Images and video: Equivalent watermarking tools exist for AI-generated images and video content
- Production record requirement: Note in the project file which assets were AI-generated at source and confirm that watermarking was applied to the original file before editing or delivery to the client
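The production-record requirement above can be kept as a small per-asset log entry. This is an illustrative sketch of the record only — it does not call SynthID or any watermarking API, and the field names are assumptions of this example:

```python
# Illustrative production-record entry for AI-generated assets.
# Records provenance only; it does not perform watermarking itself.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AssetRecord:
    filename: str
    ai_generated: bool
    watermark_applied: bool      # confirmed on the ORIGINAL file, pre-editing
    watermark_tool: str = ""     # e.g. "SynthID" for AI-generated audio
    recorded_on: date = field(default_factory=date.today)

    def delivery_ready(self) -> bool:
        """AI-generated assets must be watermarked before client delivery."""
        return (not self.ai_generated) or self.watermark_applied

record = AssetRecord("jingle_v1.wav", ai_generated=True,
                     watermark_applied=True, watermark_tool="SynthID")
print(record.delivery_ready())  # True
```

A record with `ai_generated=True` and `watermark_applied=False` fails the delivery check, which is exactly the condition the production-record requirement exists to catch.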
Section 2D — Training Data Bias Risk Register
Add the following to the policy's risk register or prohibited uses.
Named risk: Training Data Bias. AI-generated content depicting people, communities, or cultural practices must be reviewed for training data bias by a human reviewer with direct cultural knowledge. AI tools default to Western-centric, gender-stereotyped, and racially inaccurate representations because their training data was predominantly Western. This is not a setting that can be adjusted — it is the data the AI learned from.
For East African clients: This review is mandatory for all AI-generated imagery descriptions, people representations, and community references before client delivery. A reviewer without direct cultural knowledge of the community being depicted is not qualified to approve this content.
Examples on record: BuzzFeed's AI-generated travel images and DeepVogue's AI fashion tool both produced racially and culturally inaccurate depictions without flagging bias. These are the precedents this policy addresses.
Section 2E — EU AI Act Cross-Border Compliance Note
For international clients, donor organisations, or any client producing content for European audiences, add the following cross-border compliance note.
EU AI Act obligations relevant to AI-assisted content production:
- Article 4 — Labelling obligation: AI-generated content distributed to EU audiences must carry appropriate labelling identifying it as AI-generated where this is not obvious to the recipient.
- Article 28b(4) — Human oversight mandate: High-risk AI systems must include human oversight provisions. For content production, this means documented human review and approval before publication.
This note applies when: the client distributes content to EU audiences; the client receives EU donor funding with content compliance requirements; or the client operates a cross-border business with EU-facing channels. For legal certainty in EU-facing contexts, obtain advice from a qualified solicitor familiar with the EU AI Act.
Section 2F — Additional Ethical Requirements
Algorithmic Bias in Personalisation (Ltifi, 2025): AI personalisation algorithms can inadvertently reinforce demographic stereotypes — showing certain product types only to certain segments, or systematically excluding groups from offers, creating discriminatory feedback loops. Require an audit of any AI personalisation tool for demographic fairness before deployment. The audit must assess whether the system treats comparable users differently based on gender, ethnicity, or age in ways that cannot be justified by legitimate business logic.
Non-Discrimination Clause: AI-generated advertising targeting must not use protected characteristics — gender, ethnicity, religion, or age — as primary targeting variables in ways that constitute discrimination. This applies to both inclusion targeting (showing content only to favoured groups) and exclusion targeting (hiding content from disfavoured groups). Cite GDPR Article 22 and Uganda's Data Protection and Privacy Act 2019 Section 25 when advising clients on compliant targeting practice.
Explainability Obligation (Johnsen, 2024, Ch.28): When AI drives a significant strategic recommendation — audience targeting decisions, content strategy pivots, or budget allocation — the agency has an obligation to explain the AI's reasoning in plain terms to the client. AI output presented without explanation is not acceptable professional practice. Document the basis for AI-informed decisions in the strategy or reporting record.
Continuous Monitoring Obligation (Johnsen, 2024, Ch.28): Ethical AI deployment is not a one-time review. Require quarterly bias audits and model drift reviews as standard practice for any client using AI personalisation or AI-driven targeting. AI models that performed fairly at deployment can develop bias as the distribution of their training data shifts — a model trained on historical data will reflect historical inequalities unless actively monitored and corrected.
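One common heuristic for the quarterly fairness audit described above is to compare content-exposure rates across demographic segments and flag any segment whose rate falls below a set fraction of the best-served segment. The sketch below is illustrative, not a mandated audit method; the 0.8 threshold mirrors the widely used "four-fifths" disparate-impact heuristic, and the segment names are assumptions:

```python
# Illustrative quarterly fairness check: flag segments whose exposure
# rate falls below a threshold fraction of the best-served segment.
def exposure_audit(impressions: dict, audience: dict,
                   threshold: float = 0.8) -> list[str]:
    """Return demographic segments flagged for human review.

    impressions: segment -> number of users shown the content
    audience:    segment -> number of users in that segment
    """
    rates = {seg: impressions.get(seg, 0) / audience[seg] for seg in audience}
    best = max(rates.values())
    # Flag any segment served at less than `threshold` of the best rate.
    return [seg for seg, rate in rates.items() if best and rate / best < threshold]

audience = {"women_18_34": 5000, "men_18_34": 5000, "women_35_plus": 4000}
impressions = {"women_18_34": 2400, "men_18_34": 2500, "women_35_plus": 900}
print(exposure_audit(impressions, audience))  # ['women_35_plus']
```

A flagged segment is not automatically evidence of discrimination — it is the trigger for the human review and the "legitimate business logic" test that the audit obligation requires.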
East African Regulatory Alignment (Johnsen, 2024, Ch.28): For clients operating across multiple EA countries, note that national data protection frameworks vary in scope and enforcement: Uganda Data Protection and Privacy Act 2019, Kenya Data Protection Act 2019, and Tanzania's Electronic and Postal Communications Act have different definitions, rights, and penalties. Flag the national regulatory context explicitly before deploying any AI personalisation system for a cross-border client.
Data Minimisation Principle (Ltifi, 2025, Ch.2): AI personalisation systems should collect only the minimum data necessary for the task. Require clients to document their data minimisation rationale before implementing any AI personalisation or audience profiling system. Data minimisation is a legal requirement under the Uganda Data Protection and Privacy Act 2019 and Kenya Data Protection Act 2019, and a baseline ethical standard for responsible AI deployment.
Section 3 — Consultant's Internal AI Ethics Checklist
Apply this checklist before publishing any AI-assisted content for a client. Run it per piece of content, not per campaign.
- Every factual claim verified by a human against a primary or authoritative source
- No customer data, PII, or confidential client information entered into any AI prompt
- No confidential business information, trade secrets, or proprietary documents entered into any cloud-based AI tool
- Brand voice edit applied — the content sounds like the client, not like generic AI output
- No prohibited use engaged (fake review, deepfake, bot engagement, fabricated beneficiary story)
- Client has approved the content, or this content type is pre-approved per the signed content calendar
- If the client is in a regulated sector (health, finance, legal, political) — a qualified professional has reviewed the output
- Disclosure applied if the content is substantially AI-generated with minimal human editing
- If publishing under a personal name or attributed quote — confirm the named individual has reviewed and approved the text; apply Proof of Human signal for thought leadership and donor narrative content
- No attempt has been made to circumvent AI safety guidelines ('jailbreaking'); report any such attempt to [Name/Title] immediately (Venkatesan and Lecinski, 2026)
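Where the agency tracks this checklist digitally, the per-piece gate can be sketched as a simple all-items-ticked check. The item keys below are illustrative shorthand for the checklist items above, not a prescribed schema:

```python
# Per-piece pre-publication gate: every checklist item must be explicitly
# True before a piece of AI-assisted content is cleared for publication.
CHECKLIST_ITEMS = [
    "facts_verified", "no_pii_in_prompts", "no_confidential_data",
    "brand_voice_edit", "no_prohibited_use", "client_approved",
    "regulated_sector_review", "disclosure_applied",
    "attribution_confirmed", "no_jailbreak_attempt",
]

def cleared_for_publication(checks: dict) -> bool:
    """True only when every checklist item is explicitly ticked."""
    return all(checks.get(item) is True for item in CHECKLIST_ITEMS)

checks = dict.fromkeys(CHECKLIST_ITEMS, True)
print(cleared_for_publication(checks))   # True

checks["facts_verified"] = False
print(cleared_for_publication(checks))   # False — hold publication
```

Note the deliberately strict default: an item that is missing or merely "unknown" blocks publication, which matches the rule that the checklist runs per piece of content, not per campaign.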
Section 4 — Sector-Specific Guidance
Apply the relevant subsection based on the client's industry. Include all applicable subsections when multiple regulated sectors overlap (e.g., an NGO running a health programme).
Health
Never publish AI-generated health advice without clinical review by a qualified health professional. Even general wellness content can cause harm if inaccurate — AI tools are not trained as medical authorities and do not distinguish between safe and harmful guidance. Always append: "This content is for informational purposes only and does not constitute medical advice. Consult a qualified health professional." Report all health content to the client's designated clinical reviewer before scheduling.
Finance
AI-generated financial projections, savings guidance, or investment commentary requires review by a licensed financial professional before publication. Uganda's Capital Markets Authority (CMA) and Bank of Uganda (BoU) have disclosure requirements for financial communications. Always append: "This content does not constitute financial advice. Consult a licensed financial adviser." Do not use AI to generate specific return figures, interest rate comparisons, or regulatory compliance statements.
NGO and Donor-Funded Organisations
Many international donors — including USAID, EU development funds, and UN agencies — have content verification requirements embedded in grant agreements. Review the grant agreement before using AI tools for donor-facing communications, reports, or beneficiary stories. Never fabricate or embellish beneficiary stories; this constitutes research fraud and can result in grant termination. Where a donor requires human-authored narrative, document that the final text was written or substantially rewritten by a named team member.
Political and Public Sector
Uganda's National Information Technology Authority (NITA-U) guidelines and the Electoral Commission's rules govern political and election-related content. Do not use AI to generate political statements, candidate profiles, manifestos, or content attributed to public officials without disclosure and legal review. Public sector clients should obtain sign-off from their communications or legal team before any AI-assisted content is published under an official channel.
Section 5 — East Africa-Specific Considerations
Apply the following contextual guidance for all Uganda and East Africa clients.
Uganda Data Protection and Privacy Act 2019 (UDPPA)
Do not enter personal customer data into AI prompts. Names, phone numbers, National ID numbers, locations, transaction data, and health records all qualify as personal data under the UDPPA. Breach of this requirement exposes the agency and the client to regulatory sanction from the Personal Data Protection Office (PDPO). Store AI conversation logs securely and purge sensitive sessions promptly.
Audience trust
East African professional and institutional audiences — government partners, B2B buyers, international donors, and formal sector consumers — are acutely sensitive to perceived inauthenticity. Over-reliance on generic AI output risks damaging brand credibility in markets where relationships and personal trust underpin commercial decisions. Apply a rigorous brand voice edit to every piece of AI-assisted content before publication.
Language and vernacular content
AI tools produce more reliable output in English than in Luganda, Swahili, Runyankore, Acholi, or other regional languages. Human-written vernacular content is strongly preferred for community-facing and rural-audience communications. Where AI is used to draft vernacular text, require a fluent native-speaker review before publication — machine translation into East African languages introduces both linguistic errors and cultural missteps that damage trust.
Local context accuracy
AI tools are trained predominantly on Western and global datasets. They frequently produce incorrect Uganda-specific facts: wrong prices, outdated regulations, inaccurate geography, and unfamiliar local institutions. Always verify EA-specific claims — market prices, regulatory body names, government programme titles, local statistics — against current Ugandan or East African primary sources before publication.
Quality Criteria
Output meets the standard for this skill when:
- The policy template is complete and contains no unfilled placeholders; all bracketed fields are populated with client-specific information gathered during the Required Inputs stage.
- The five ethical principles (transparency, fairness, nonmaleficence, accountability, privacy) are presented as a table and cited to Ltifi (2025) and Johnsen (2024).
- The prohibited uses list explicitly names fake testimonials, deepfakes, bot engagement, fabricated beneficiary stories, filter bubble risk, and copyright/ownership uncertainty.
- The Proof of Human signal and virtual influencer disclosure requirement are present in the Disclosure clause.
- The data and privacy clause prohibits both PII entry and confidential business information entry into cloud AI tools, and references the Samsung incident (Venkatesan and Lecinski, 2026).
- The human review requirement is stated explicitly in both the policy document and the consultant's checklist — AI output is never published without human approval.
- Sector-specific guidance covers at least health and finance with specific, actionable instructions; all sectors relevant to the client are included.
- The Uganda Data Protection and Privacy Act 2019 is named explicitly and the prohibition on entering PII into AI prompts is unambiguous.
- The consultant's internal checklist is actionable as a per-piece pre-publication review, not a one-time setup exercise, and includes data leakage and jailbreak awareness items.
- The entire document is written in British English with no American spellings (organisation, colour, behaviour, programme, recognise, analyse, etc.).
References
Consult the following skills where relevant:
- playbook-ai-content-workflow/SKILL.md — the operational workflow for producing AI-assisted content; read this when setting up or auditing the client's production process.
- playbook-social-media-policy/SKILL.md — the broader social media policy framework; the AI Content Ethics Policy sits within or alongside this document.
- 04-brand-voice-intake/SKILL.md — captures the brand voice, tone, and communication standards that AI tools must be briefed against before drafting client content.
Key citations used in this skill:
- Ching, J. and Mothi, N. (2025) — AI attribution/disclosure standard; IP and copyright guidance; SynthID watermarking; training data bias risk; EU AI Act Articles 4 and 28b(4).
- Johnsen, R. (2024) AI Ethics in Practice
- Ltifi, M. (2025) Artificial Intelligence and Social Media Marketing
- Schaefer, M. (2025) Belonging to the Brand
- Venkatesan, R. and Lecinski, J. (2026) The AI Marketing Canvas