Effective Date: 21 April 2025
1. Purpose
This policy outlines our approach to federation and domain blocking within the Fediverse network, emphasizing user autonomy while maintaining necessary safeguards against harmful behavior.
2. Core Principles
We believe in empowering users to make their own content decisions. Rather than relying primarily on service-wide domain blocks, users retain full authority to block individual accounts and domains according to their own preferences.
3. Defederation Criteria
Our instance will implement service-wide domain blocks only when a server meets one or both of the following criteria:
3.1 Technical Abuse
The server engages in disruptive technical behavior, including but not limited to:
- Distributed Denial of Service (DDoS) attacks
- Spam distribution
- Attempted circumvention of rate limits
- Attempted circumvention of user blocks, content mutes, or other user privacy settings
3.2 Harmful Content
The server encourages the distribution of, or refuses to restrict, one or more of the following content types:
- Content that promotes hatred or toxicity
- Illegal or unethical material
- Child exploitation materials
- Extreme violence or gore
- Coordinated spam campaigns
4. External Block List Implementation
While we reference the Tier 0 Oliphant block list as a valuable resource for identifying problematic instances, we do not automatically implement its recommendations. Each listing undergoes internal review to ensure it meets our specific defederation criteria.
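For illustration only, the sketch below shows one way such a review step could work in practice: it diffs a local copy of an external block list against our existing blocks and prints a queue for a human administrator, rather than applying any block automatically. The file names and CSV column names (domain / #domain, public_comment) are assumptions for the example, not a description of our actual tooling.

```python
#!/usr/bin/env python3
"""Produce a human review queue from an external block list.

Minimal illustrative sketch, not theATL.social's actual tooling.
File names and CSV column names are assumptions: Mastodon-style
block list exports commonly use a "domain" or "#domain" column.
"""

import csv

EXTERNAL_LIST = "oliphant_tier0.csv"   # hypothetical local copy of the Tier 0 list
CURRENT_BLOCKS = "current_blocks.csv"  # hypothetical export of our existing blocks


def load(path: str) -> dict[str, dict]:
    """Read a block list CSV into a {domain: row} mapping."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = {}
        for row in csv.DictReader(f):
            # Accept either header variant; both appear in the wild.
            domain = (row.get("domain") or row.get("#domain") or "").strip().lower()
            if domain:
                rows[domain] = row
        return rows


def main() -> None:
    external = load(EXTERNAL_LIST)
    current = load(CURRENT_BLOCKS)

    # Only entries we have not already acted on need review.
    pending = sorted(set(external) - set(current))

    # Print a queue for an administrator; nothing is blocked automatically.
    for domain in pending:
        row = external[domain]
        comment = row.get("public_comment") or row.get("#public_comment") or ""
        print(f"REVIEW\t{domain}\t{comment}")


if __name__ == "__main__":
    main()
```

An administrator would then evaluate each queued domain against the Section 3 criteria before deciding whether to block.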
5. Policy Enforcement
Our administrative team conducts prompt evaluations when instances are reported or identified as potentially violating these criteria. While some situations may require careful consideration, our experience has shown that violations of these standards are typically clear and actionable.
6. Reporting Blocked Instances
To report an instance for review under this policy, please email admin@theatl.social with relevant details and documentation.
7. Content Screening
We use a third-party vendor, Cloudflare, to screen all images and videos received by theATL.social via federation for the presence of Child Sexual Abuse Material (CSAM). If a match is found, the image or video is blocked and both theATL.social and the National Center for Missing and Exploited Children (NCMEC) are notified.
For more information on the CSAM screening process, please review the Terms of Service.
8. Review and Updates
This policy is subject to periodic review and may be updated to address emerging challenges within the Fediverse ecosystem. We welcome community feedback and questions regarding these guidelines.
For questions or clarification about this policy, please contact the instance administrators.