ClassShield is a transparent, ethical content moderation prototype designed for educational environments. It combines high-performance machine learning with human oversight to protect students while upholding privacy and institutional trust.
ClassShield processes images through a linear, multi-layered safety pipeline:
- Layer 1 (ML Detection): Local NudeNet models and Sightengine cloud validation.
- Layer 2 (Contextual Scoring): RGB skin ratio analysis and keyword-based risk assessment.
- Layer 3 (AI Vision Context): Groq-powered Llama Vision analysis providing 360-degree situational context.
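The layered flow above can be sketched in miniature as follows. This is an illustrative assumption, not ClassShield's actual implementation: the RGB skin-tone band, weights, and thresholds are placeholders, and `ml_score` / `caption` stand in for the Layer 1 (NudeNet/Sightengine) and Layer 3 (Llama Vision) outputs.

```python
import numpy as np

def skin_ratio(rgb: np.ndarray) -> float:
    """Fraction of pixels falling in a simple RGB skin-tone band.

    The band below is a common illustrative heuristic, not
    ClassShield's tuned ranges.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    return float(mask.mean())

def moderate(image: np.ndarray, ml_score: float, caption: str) -> str:
    """Linear pipeline: ML detection -> contextual scoring -> vision context.

    All weights and cutoffs are placeholders for demonstration.
    """
    risk = ml_score                    # Layer 1: ML detection score
    risk += 0.3 * skin_ratio(image)    # Layer 2: skin-ratio contribution
    if "beach" in caption.lower():     # Layer 3: neutral context lowers risk
        risk -= 0.2
    if risk >= 0.8:
        return "blocked"
    if risk >= 0.5:
        return "review"
    return "allowed"
```

Running the same image through with and without neutral context shows how Layer 3 can downgrade a hard block to a soft flag.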
Administrators can customize safety thresholds on the fly:
- Block Thresholds: Adjust sensitivity for hard-blocking content.
- Review Thresholds: Set "Soft Flags" for human review without interrupting student workflows.
- Context Toggles: Enable/disable specific rules for beach context, swimwear, or lingerie patterns.
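A minimal sketch of what such an admin-adjustable policy record might look like; the field names, defaults, and toggle keys are assumptions for illustration, not ClassShield's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationPolicy:
    """Hypothetical policy record mirroring the admin-adjustable settings."""
    block_threshold: float = 0.85   # scores at/above this are hard-blocked
    review_threshold: float = 0.55  # scores at/above this get a soft flag
    context_toggles: dict = field(default_factory=lambda: {
        "beach": True,      # treat beach scenes as neutral context
        "swimwear": True,   # allow swimwear in neutral context
        "lingerie": False,  # lingerie patterns always escalate
    })

    def classify(self, score: float) -> str:
        """Map a risk score to the soft/hard flag separation."""
        if score >= self.block_threshold:
            return "blocked"   # hard flag: content is withheld
        if score >= self.review_threshold:
            return "review"    # soft flag: queued for human review
        return "allowed"
```

Lowering `block_threshold` makes the policy stricter without touching the review band, which is what keeps soft and hard flags independently tunable.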
- Soft vs. Hard Flags: Clear separation between "Review Only" and "Blocked" content to reduce cognitive load.
- Privacy Heatmaps: Blurred risk zones that highlight concerns (Red for risk, Yellow for skin) without exposing admins to explicit content.
- Deterministic Caching: SHA-256 image hashing ensures identical images always receive identical decisions, with verdicts persisted in SQLite.
- No Auto-Deletion: Human verification is mandatory for all disciplinary actions.
- Privacy-First: Images are processed entirely in memory; only cryptographic hashes are stored for audit logs.
- Contextual Awareness: Explicitly labels neutral context (e.g., educational beach photos) to prevent false-positive frustration.
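The deterministic-caching guarantee can be sketched as a hash-then-lookup step in front of the pipeline. The table name, schema, and `decide` callable below are illustrative assumptions, not ClassShield's actual code; only the raw bytes are hashed and only the digest is stored, matching the privacy-first audit-log design.

```python
import hashlib
import sqlite3

def image_hash(image_bytes: bytes) -> str:
    """SHA-256 digest of the raw image bytes, used as the cache key."""
    return hashlib.sha256(image_bytes).hexdigest()

def cached_decision(conn: sqlite3.Connection, image_bytes: bytes, decide) -> str:
    """Return the cached verdict for an image, computing it at most once.

    `decide` stands in for the full moderation pipeline.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS decisions (hash TEXT PRIMARY KEY, verdict TEXT)"
    )
    key = image_hash(image_bytes)
    row = conn.execute(
        "SELECT verdict FROM decisions WHERE hash = ?", (key,)
    ).fetchone()
    if row:
        return row[0]                  # identical image -> identical decision
    verdict = decide(image_bytes)      # run the pipeline only on first sight
    conn.execute("INSERT INTO decisions VALUES (?, ?)", (key, verdict))
    conn.commit()
    return verdict
```

Because the key is a content hash rather than a filename, re-uploads of the same image hit the cache regardless of where they come from.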
- Backend: Flask (Python 3.11)
- AI/ML: NudeNet (Local), Sightengine API, Groq (Llama-3.2-90b-vision)
- Database: SQLite (Policy & Decision Caching)
- Image Processing: OpenCV, PIL, NumPy
- Frontend: Bootstrap 5, Vanilla JavaScript
- Install Dependencies:
  pip install -r requirements.txt
- Configure Secrets: add the following to your environment/secrets:
  SIGHTENGINE_API_USER, SIGHTENGINE_API_SECRET, GROQ_API_KEY, ADMIN_PASSWORD
- Launch:
  python main.py

Access the dashboard at http://localhost:5000.
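On a Unix-like shell, the secrets above can be exported before launch; the values below are placeholders you must replace with your own credentials.

```shell
# Placeholder values -- substitute your real Sightengine/Groq credentials.
export SIGHTENGINE_API_USER="your-user-id"
export SIGHTENGINE_API_SECRET="your-secret"
export GROQ_API_KEY="your-groq-key"
export ADMIN_PASSWORD="choose-a-strong-password"
```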
The web interface includes comprehensive guides:
- /ethical-ai: 6-point core principle breakdown.
- /bias-testing: Performance report across Fitzpatrick skin tones I-VI.
- /education: Student-facing materials on safety and AI.
- /submission: Judge-ready technical package.
ClassShield was founded and is developed by Anvesh Raman.
This project is licensed under the MIT License - see the LICENSE file for details.
Built for safety, driven by ethics, verified by humans.

