This video discusses how police departments use "predictive policing" tools to assign threat scores to individuals, even those with no criminal record (0:06-0:12). These systems collect years of data on a person's movements, associations, online activity, and calls for service near their home, then run algorithms over that data to predict future behavior (0:59-1:08).
Key points covered:
- Predictive Policing Explained (1:55): Predictive policing uses historical data and algorithms to forecast future crime. There are two main types:
- Place-based predictive policing: Directs patrol resources toward geographic "hotspots" (2:05); a toy hotspot-ranking sketch appears after this summary.
- Person-based predictive policing: Algorithms score an individual's risk of being involved in, or becoming the victim of, a crime (2:17).
- Factors Contributing to Threat Scores (2:28): Scores can rest on factors that require no conviction, such as arrest history without conviction, proximity to past incidents, alleged associations, calls for service near one's address, social media signals, and whatever patterns the algorithm deems risky (2:31-2:53). A toy scoring sketch follows this summary.
- The Chicago "Heat List" Example (3:15): Chicago's Strategic Subject List, or "heat list," scored individuals on gun-violence risk using arrest records, victimizations, and alleged affiliations. It flagged many law-abiding citizens, and the resulting surveillance and harassment created a feedback loop: more police contact generated more data, which further raised scrutiny (3:36-4:07). A small simulation of this loop appears after the summary.
- Legality and Bias (4:27): The system automates existing biases: because it learns from historical policing data drawn from overpoliced areas, it keeps directing attention back to those same areas (4:58-5:31). Current federal law often permits these invisible scores because the Supreme Court evaluates a police action at the moment it occurs, not the AI "nudge" that led to it (5:41-5:54). Police can therefore use minor infractions as "pretext stops" to investigate individuals flagged as high-risk, without admitting the score was the real reason (6:33-7:11). Individuals have no right to challenge or appeal their scores, which are treated as internal police tools (7:22-7:58).
- Broader Implications (8:31): The video warns that such data collection and scoring could extend beyond policing into housing, employment, and insurance, potentially leading to denials based on an AI-predicted "high risk" without transparency or recourse (9:35-9:54).
- How to Protect Your Data (9:55): The video provides a step-by-step guide to limit data collection:
- iPhone Settings (10:20): Turn off app tracking requests (10:24), use approximate location for most apps (10:34; see the precision-reduction sketch after this summary), turn off Apple personalized ads (10:50), stop sharing iPhone analytics (11:00), tighten Safari's anti-tracking settings (11:15), and turn on Mail Privacy Protection (11:25).
- Android Settings (11:39): Delete your advertising ID (11:41; the join sketch after this summary shows why this matters), clamp down on app permissions (11:51), audit apps with the privacy dashboard (12:02), and turn off or limit device location access (12:17).
- Advanced Data Control (12:30): Turn off Web & App Activity (12:35), turn off the Google Maps Timeline (12:47), and turn off YouTube history used for ad personalization (12:54).
- Additional Steps (13:32): Use "Hide My Email" for sign-ups (13:08; an aliasing sketch appears after this summary), enable Private Relay with iCloud+ (13:16), opt out of people-search sites (13:36), opt out of pre-screened credit and insurance offers (13:44), freeze your credit (13:52), and opt out of data-sharing and driving-score programs in your vehicle's connected services (14:00).
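To make the place-based idea concrete (2:05), here is a minimal sketch, assuming nothing about any vendor's real model: it simply bins past incident coordinates into grid cells and ranks the busiest ones, which is the core of "hotspot" mapping. The coordinates and cell size are invented for illustration.

```python
from collections import Counter

def top_hotspots(incidents, cell_size=0.01, k=3):
    """Bin incident (lat, lon) points into square grid cells and
    return the k cells with the most past incidents."""
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in incidents
    )
    return counts.most_common(k)

# Made-up coordinates: three incidents cluster in one cell.
incidents = [(41.881, -87.623), (41.882, -87.624), (41.881, -87.622),
             (41.900, -87.650)]
print(top_hotspots(incidents))  # the densest cell ranks first
```

Patrols are then concentrated on the top-ranked cells, which is also how over-patrolled cells keep generating the most new incident reports.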
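The person-based scoring described at 2:28-2:53 can be pictured as a weighted sum over exactly the factor types the summary lists. Real systems are proprietary; the field names and weights below are hypothetical placeholders, chosen only to show how a score can climb without a single conviction.

```python
# Hypothetical weights -- invented for illustration only.
WEIGHTS = {
    "arrests_without_conviction": 10,
    "was_near_past_incident": 5,
    "alleged_associations": 8,
    "calls_for_service_near_home": 3,
    "risky_social_media_signals": 4,
}

def threat_score(person):
    """Toy threat score: a weighted sum of counts per factor.
    Note that none of these factors requires a conviction."""
    return sum(WEIGHTS[f] * person.get(f, 0) for f in WEIGHTS)

# Someone with no convictions can still rank as "high risk".
neighbor = {"calls_for_service_near_home": 6,
            "was_near_past_incident": 2,
            "alleged_associations": 1}
print(threat_score(neighbor))  # 6*3 + 2*5 + 1*8 = 36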
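The "heat list" feedback loop (3:36-4:07) is easy to simulate with invented numbers: a score above a threshold triggers extra stops, each recorded stop adds data, and the added data raises the score, so the flag becomes self-sustaining.

```python
def simulate_feedback(score, rounds=5, threshold=30):
    """Toy loop: being flagged causes stops, and each recorded
    stop raises the score, regardless of any actual wrongdoing."""
    for r in range(rounds):
        flagged = score >= threshold
        new_contacts = 2 if flagged else 0  # extra stops when flagged
        score += new_contacts * 5           # each contact adds data
        print(f"round {r}: flagged={flagged}, score={score}")
    return score

simulate_feedback(score=36)  # starts flagged; the score only ratchets up
```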
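The "approximate location" setting (10:34) amounts to precision reduction. This is a conceptual sketch, not Apple's implementation: rounding coordinates to two decimal places reports a position accurate to roughly a kilometer instead of a single building.

```python
def coarsen(lat, lon, decimals=2):
    """Reduce GPS precision: ~2 decimals is roughly 1 km,
    versus ~5 decimals, which can pinpoint a single building."""
    return round(lat, decimals), round(lon, decimals)

precise = (41.88172, -87.62336)   # made-up point, building-level
print(coarsen(*precise))          # (41.88, -87.62): neighborhood-level
```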
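Deleting the advertising ID (11:41) matters because that ID is a stable key that lets data from otherwise unrelated apps be joined into one profile. A minimal sketch with made-up records:

```python
# Made-up events reported by two unrelated apps, keyed by ad ID.
weather_app = [{"ad_id": "a1b2", "home_area": "60614"}]
fitness_app = [{"ad_id": "a1b2", "route": "nightly run on Oak St"}]

# A data broker can join the streams on the shared identifier.
profile = {}
for event in weather_app + fitness_app:
    profile.setdefault(event["ad_id"], {}).update(
        {k: v for k, v in event.items() if k != "ad_id"})
print(profile)  # one linked profile; deleting the ID breaks the join
```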
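Finally, "Hide My Email" (13:08) works on the aliasing principle: each sign-up gets a unique forwarding address, so no two services hold the same email. The sketch below shows the general idea only, not Apple's service; the relay domain and helper are fictional.

```python
import secrets

def make_alias(service, real_address, registry):
    """Create a random per-service alias that forwards to the
    real address; disabling one alias cuts off one service."""
    alias = f"{secrets.token_hex(4)}@relay.example"
    registry[alias] = {"forwards_to": real_address, "service": service}
    return alias

registry = {}
print(make_alias("shopping-site", "me@example.com", registry))
print(make_alias("newsletter", "me@example.com", registry))
# Different services see different addresses, so data brokers
# cannot link the two accounts by email.
```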