AI Mistakes a Bag of Chips for a Gun
In a striking incident at Kenwood High School in Baltimore County, a student was handcuffed and searched after an AI video-surveillance system flagged his snack bag as a weapon. The system used across the district's schools triggered an alert when it interpreted the shape of a crumpled bag of chips, specifically a bag of Doritos, and the way the student was holding it as a firearm. The alert prompted police to respond, handcuff the student, and conduct a search before determining that no weapon was present. The school later offered counselling to the student and to classmates who witnessed the incident.
The event underlines both the promise and the pitfalls of deploying artificial-intelligence systems for threat detection in educational settings. On the one hand, the technology is intended to enhance security by detecting weapons rapidly and alerting responders early. On the other hand, this case demonstrates how false positives—especially when using image-recognition or posture-based heuristics—can lead to serious consequences: trauma, suspicion, loss of trust, and disruption to students’ lives.
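To see why false positives tend to dominate when the event being detected is extremely rare, consider a back-of-the-envelope calculation. The figures in the sketch below are illustrative assumptions only, not reported performance numbers for the system involved in this incident or any other vendor's product.

```python
# Hypothetical illustration of the base-rate effect in rare-event detection.
# Every number below is an assumption chosen for the sake of the example.
frames_per_day = 100_000        # camera frames analysed across a district per day
true_weapon_frames = 1          # frames that actually contain a weapon (rare event)
sensitivity = 0.99              # chance the system flags a real weapon
false_positive_rate = 0.001     # chance it flags a harmless frame (0.1%)

true_alerts = true_weapon_frames * sensitivity
false_alerts = (frames_per_day - true_weapon_frames) * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"Expected alerts per day: {true_alerts + false_alerts:.0f}")
print(f"Share of alerts that are real weapons: {precision:.1%}")
# Even with a seemingly tiny 0.1% false-positive rate, roughly 100 alerts a day
# would be false alarms, and under 1% of alerts would involve a genuine weapon.
```

Under these assumed numbers, almost every alert a responder sees is a false alarm, which is why the human steps between detection and police response matter so much.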
The student's account highlights how ordinary behaviour (sitting with friends after football practice, eating a snack) was suddenly reframed by the system as a potential threat. In the aftermath, school officials and district authorities expressed regret over the incident, acknowledging the distress caused and offering support services. The vendor of the AI-based gun-detection system, as described in local reporting, did not publicly elaborate on how the error occurred or what remedial steps would be taken.
Broader questions emerge about the governance of such technologies in schools: What accuracy thresholds are acceptable? Who verifies alerts before law enforcement is summoned? How are bias, privacy and student rights weighed against safety? This incident may spur educational authorities, AI vendors and policymakers to review deployment protocols, calibration and oversight of automated surveillance systems.
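One concrete way to operationalise the verification question is to require a staff sign-off, with low-confidence detections dismissed or routed for a second opinion, before anyone calls the police. The following Python sketch is a minimal, hypothetical illustration of such a triage gate; the class names, thresholds and workflow are assumptions made for the example, not a description of the district's actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AlertOutcome(Enum):
    DISMISSED = auto()            # reviewer judged the detection a false positive
    ESCALATED = auto()            # reviewer confirmed a possible weapon; notify police
    NEEDS_SECOND_REVIEW = auto()  # detection too uncertain for a single reviewer


@dataclass
class Detection:
    camera_id: str
    label: str         # e.g. "firearm"
    confidence: float  # model score in [0, 1]


def triage(detection: Detection, reviewer_confirms: bool,
           auto_dismiss_below: float = 0.5,
           require_second_review_below: float = 0.8) -> AlertOutcome:
    """Route an AI weapon alert through a human check before any police call."""
    if detection.confidence < auto_dismiss_below:
        return AlertOutcome.DISMISSED
    if not reviewer_confirms:
        return AlertOutcome.DISMISSED
    if detection.confidence < require_second_review_below:
        return AlertOutcome.NEEDS_SECOND_REVIEW
    return AlertOutcome.ESCALATED


if __name__ == "__main__":
    # A crumpled chip bag scored as a "firearm" with middling confidence:
    alert = Detection(camera_id="cafeteria-03", label="firearm", confidence=0.62)
    print(triage(alert, reviewer_confirms=False))  # AlertOutcome.DISMISSED
```

The key design point is that the model's output never triggers a police response on its own; a human decision sits between detection and escalation.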
Overall, this case is not only about a single misidentification; it is about the larger tension between automated safety systems and human judgement, especially in high-stakes environments involving minors. It is a reminder that technological systems, however sophisticated, remain subject to error, misinterpretation and misclassification, particularly when confronting real-world complexity: varied lighting, ambiguous posture and objects that resemble one another (a crumpled snack bag versus a firearm).
Main Points
- A Baltimore County high-school student was handcuffed after a school-camera AI system mistakenly identified his bag of Doritos as a weapon.
- The district uses an AI-based gun-detection system that monitors school camera feeds and sends alerts on suspicious objects/behaviours.
- The student was searched, no weapon was found, and the school subsequently offered counselling support to those involved.
- The system's vendor is under pressure to explain how the error occurred; local reporting indicates the AI flagged the shape of the snack bag and the way it was being held.
- The event raises broader concerns about false positives, reliance on AI in school safety, student rights, privacy and human oversight of automated alerts.
Future Projections
- Policy & oversight: School districts may review their deployment protocols for AI-based security systems, enforcing stricter human verification before police response, and setting stricter accuracy thresholds.
- Vendor liability & transparency: AI providers may face pressure to reveal performance metrics, error rates and explainability of detections—especially in child-safety contexts.
- Student-rights protections: Legal and advocacy groups may push for safeguards against “algorithmic policing” in schools, including clear appeal rights for students wrongly flagged.
- Technology refinement: AI systems may evolve to incorporate additional context (object recognition, student behaviour history, teacher input) to reduce false positives and better distinguish objects like snack bags from weapons; a minimal sketch of this kind of signal fusion follows this list.
- Public trust & acceptance: Incidents like this may erode trust in automated systems, prompting schools to balance AI tools with visible human oversight, transparency and community dialogue.
- Broader implications for surveillance culture: The case amplifies debates over how much automated surveillance is appropriate in education and how to balance safety with student autonomy, dignity and rights.
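The sketch below illustrates the signal-fusion idea mentioned under "Technology refinement": a primary weapon-detector score is weighed against a second, general-purpose classifier's score for a benign explanation of the same object before any alert is raised. The function name, thresholds and scores are invented for illustration and do not describe any real vendor's system.

```python
def fuse_scores(weapon_score: float, benign_object_score: float,
                weapon_threshold: float = 0.9) -> str:
    """Combine a weapon-detector score with a second classifier's score for a
    benign explanation of the same object (e.g. 'foil snack bag').

    Returns 'no_alert', 'review' (route to a human) or 'alert'.
    """
    if weapon_score < 0.5:
        return "no_alert"
    # A stronger benign explanation suppresses an automatic alert and sends
    # the frame to a human reviewer instead.
    if benign_object_score > weapon_score:
        return "review"
    return "alert" if weapon_score >= weapon_threshold else "review"


# Example: the detector is 70% sure it sees a firearm, but a generic object
# classifier is 85% sure the object is a snack bag -> human review, no alert.
print(fuse_scores(weapon_score=0.70, benign_object_score=0.85))  # "review"
```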
References
- "US student handcuffed after AI system apparently mistook bag of chips for gun", The Guardian (24 Oct 2025) – https://www.theguardian.com/us-news/2025/oct/24/baltimore-student-ai-gun-detection-system-doritos
- "'Just holding a Doritos bag': Student handcuffed after AI system mistook bag of chips for weapon", WBAL-TV (22 Oct 2025) – https://www.wbal.com/article/student-handcuffed-ai-system-mistook-bag-of-chips-for-weapon/69114601

