Why The Haarlem Declaration Matters: Championing AI for advancing inclusive, safe, and reliable digital media spaces
#DMIS24 #RWN
As a global community of digital media makers – including independent media organizations, journalists, digital content creators and civil society groups using digital media for social impact – we recognize that AI technologies, and generative AI in particular, are actively and rapidly influencing, shaping and transforming the digital media ecosystem. Their influence extends across all aspects of content creation, curation, recommendation, moderation, distribution, promotion, and audience analytics.
While AI technology offers the potential to make routine and labour-intensive digital media processes and functions more efficient, streamlined and cost-effective, its deployment also introduces significant risks. Already visible is the disruption of the digital media ecosystem through the proliferation of misinformation, an erosion of human oversight in editorial decision-making, and the displacement of media workers. As a result, there is an urgent need for a value-driven, responsible and ethics-based approach to integrating AI technologies across digital media work and organisations.
In light of the adoption of the Global Digital Compact (GDC) by United Nations Member States in September 2024 at the Summit of the Future – which includes among its broader key objectives "fostering an inclusive, open, safe, and secure digital space that respects, protects and promotes human rights; advancing responsible, equitable and interoperable data governance approaches; and enhancing international governance of artificial intelligence for the benefit of humanity" – it is imperative that we, as digital media practitioners, take a proactive stance. We must commit to using this powerful technology, in its applications to digital media, to advance the public good and serve the public interest.
We, therefore, under the ambit of the Haarlem Declaration 2024, commit to deploying and utilizing all forms of AI-powered tools and technologies, in relation to digital media, in accordance with the following values and principles:
1. Ensuring transparency and explainability
- Human Accountability: Maintain transparency about how AI contributes to tasks and decision-making processes across individual, programmatic, and organizational levels.
- Explainable tools: Prioritise the use of AI tools that provide clear, understandable explanations for their recommendations, data privacy settings and data consent management.
- Continuous Education: Educate ourselves and relevant stakeholders about AI functionalities, potential biases, and limitations to enable greater transparency and better inform our decisions.
Examples in Practice:
- We commit to conveying to our stakeholders and audiences how AI is utilized in content production, curation, distribution, and audience analytics practices.
- We commit to urging AI developers to prioritise explainability and accessibility in AI system design, and we, as digital media practitioners, commit to sharing this information in an engaging and understandable way to promote public good and foster informed dialogue.
2. Promoting ethical data practices
- Protect Personal Information: Ensure all AI applications comply with data protection laws and safeguard sensitive data.
- Data Minimization: Limit data collection to what is necessary for specific, clearly defined purposes.
- Consent and Agency: Obtain informed consent for data use, providing users with transparent, accessible privacy policies that clearly outline how data is collected, stored, used, shared, and deleted. Enable users to exercise control and agency over their data with opt-in and opt-out options.
Examples in Practice:
- We commit to implementing AI tools that have clear terms and processes for data collection and use – including having opt-in and opt-out options in place.
- We commit to calling on AI developers to design tools and systems that prioritize data diversity and inclusivity, guided by frameworks of informed consent and privacy protections.
- We commit to openly communicating to our audiences and stakeholders what types of data we collect, the purpose behind the collection, and how we use this data, whether for content curation, data analytics, or other applications
3. Safeguarding Information Integrity & Content Authenticity
- Validate AI Outputs: Implement comprehensive fact-checking protocols for all content generated, curated, or recommended by AI systems to ensure its reliability
- Avoid and Minimize Misinformation: Establish safeguards in our processes to ensure that all AI-supported communications are factual, non-harmful, and serve the public interest.
- Quality Control: Develop and adhere to content standards that prioritise accuracy, credibility, and contextual integrity before dissemination.
Examples in Practice:
- We commit to upholding journalistic integrity and accuracy, ensuring that AI-generated content adheres to the same standards of truth as human-produced content.
- We commit to respecting the intellectual property rights of independent media, and we call on AI developers to refrain from using our content to train their AI models without explicit permission or financial compensation.
- We commit to full transparency when AI tools have been used to create or support digital content (be it text, audio, visuals, captions, etc.), as well as for fundraising purposes (proposal development, positioning and visibility documents, etc.).
- We commit to the measures and knowledge sharing necessary to uphold content provenance and authenticity.
4. Minimising Bias, Harm, and Discrimination in the Use of AI Tools
- Assess AI Applications: Regularly evaluate AI tools to identify, mitigate, and address potential biases that could result in unjust or discriminatory outcomes.
- Promote Fairness: Use diverse datasets and inclusive algorithmic designs to ensure AI systems treat users equitably and prevent the perpetuation of harmful and discriminatory stereotypes.
- Human Oversight: Maintain active human supervision to monitor and correct biased AI outputs, ensuring alignment with human rights principles.
Examples in Practice:
- We commit to consciously implementing AI for content moderation in ways that uphold freedom of expression while effectively countering harmful content such as hate speech and disinformation.
- We commit to ensuring that AI-driven content moderation processes we use are transparent, accountable, and free from bias.
- We commit to using AI in content curation to promote a diversity of perspectives, providing relevant and inclusive content to our audiences while actively avoiding the reinforcement of echo chambers or the perpetuation of inequality.
- We commit to calling on AI developers to design AI systems that prioritize data diversity and inclusivity, ensuring that marginalized and underrepresented groups are meaningfully included in algorithmic decision-making
5. Centring people over technology
- Assess Impact on Roles: Evaluate how AI implementation may affect staff positions and responsibilities, and ensure that any transformation is managed with a people-first approach.
- Support Staff Transition: Provide training, upskilling, and development opportunities to adapt to new technologies.
- Balance Automation and Human Touch: Use AI to enhance, not replace, human expertise, creativity, and interaction.
Examples in Practice:
- We commit to ensuring that human oversight remains a central feature of all AI processes, enabling accountability and intervention in cases of error or harm.
- We commit to ensuring that all AI systems are regularly audited and updated to meet evolving ethical standards and societal needs.
- We commit to using AI in content production to enhance creativity and efficiency while ensuring that human oversight remains integral to the editorial process.
- We commit to collaborative learning and knowledge sharing with other media organizations, AI developers, and policymakers, exchanging best practices in AI use and insights into its impact on the people who use these tools.
- We call on governments and regulatory bodies to establish clear guidelines and frameworks that support the ethical use of AI in media, while safeguarding freedom of the press and human rights.
6. Balancing the environmental impact of AI use
- Energy Consumption: Recognise the energy demands of AI applications, especially those requiring significant computational power, and actively seek to minimize their carbon footprint.
- Eco-Friendly Options: Opt for AI tools that prioritize energy efficiency and a reduced environmental impact.
- Green Policies: Embed sustainability considerations into procurement and operational policies pertaining to the deployment and use of AI tools.
Examples in Practice:
- We commit to integrating a climate and environmental lens into our strategy and implementation processes to promote mindful use of AI technologies and applications in everyday work
- We commit to engaging in collaborative knowledge sharing with other media organizations and our audiences to raise awareness about the environmental impact of AI. This includes supporting and highlighting marginalized narratives and counter-narratives on this pressing and intersecting topic
Practical Commitments
To realise these principles and values in practice, we will strive to implement the following practical commitments:
- Learn, Share, Deliberate and Empower: As individuals and as a collective, learn about using AI ethically and responsibly through open and accessible dialogue, learning circles, and lived experiences of using AI in our everyday digital media work
- Shape & Amplify Diverse Narratives: Utilize the power of digital media to influence existing narratives about AI and to shape diverse, marginalised, and counter-narratives with our respective audiences, online and offline, to inform opinions, decisions and actions
- Co-document Our Stories of AI Practice: Document, over time, our experiences with AI for internal and external mutual reflection on the varying trajectories and outcomes resulting from the application of AI in digital media work, including the human experience of using AI technology
- Ethical AI Checklist: Operationalise ethical and responsible use of AI in digital media work by co-drafting and implementing an ethical AI checklist to regularly monitor, course-correct, promote and ensure ethical use of AI across various digital media tasks, processes and functions
- Produce Evidence-Backed Research on both the positive and negative implications of AI in digital media – spanning information integrity/disorder, the displacement of media workers, the propagation of bias resulting from the use of AI in digital media processes and functions, etc.
- Collectively Advocate with our peers and stakeholders in the (digital) media ecosystem, where relevant, for minimizing harms and promoting the ethical and responsible use of AI in digital media and by digital media actors, including donors and funders
We, the undersigned, believe that these actions, rooted in our shared values and principles, will ensure that, as digital media makers, we remain both alert and informed about existing and forthcoming changes in AI and technology more broadly. Together, we will continue to co-create, enable, and sustain inclusive, safe, and reliable digital media spaces for all.
Would you like to contribute to the Haarlem Declaration, or do you have suggestions for driving its adoption and application globally? Then drop a comment or send us a WhatsApp message at +23407044378317.