Tech by Blaze Media

© 2024 Blaze Media LLC. All rights reserved.
AI cameras can now predict and stop crimes before they happen

We’re now living in “Minority Report.” Japan’s National Police are set to embark on a terrifying experiment: using advanced AI-powered security cameras to pre-empt major crimes. These AI-enhanced cameras will specialize in machine-learning pattern recognition across three distinct categories: behavior detection for spotting suspicious conduct, object detection for identifying weapons, and intrusion detection to safeguard restricted areas.

This initiative is expected to roll out within this fiscal year — ending March 2024 — after the shocking assassination of former Japanese Prime Minister Shinzo Abe and the subsequent attempted attack on current Prime Minister Fumio Kishida. Such high-profile crimes, often committed by so-called "lone offenders" — individuals disassociated from society — have prompted Japan’s police to explore crime-prevention strategies.

Proponents assert that the AI’s "behavior detection" algorithm can learn to recognize patterns indicative of suspicious activity, such as repetitive, anxious glances. Previous attempts at AI-aided security have homed in on behaviors like restlessness and fidgeting, which may indicate unease or guilt. This is a worrying leap forward in what is possible for modern security agencies.

The Chinese model goes global

From ubiquitous police cameras on street corners to online monitoring and censorship, the Chinese population is constantly under surveillance. Now, a new generation of technology is delving into the vast pool of data gathered from daily activities, aiming to predict crimes and protests before they occur. However, these predictive systems aren’t just targeting those with a criminal record; they are also used to identify vulnerable groups, including ethnic minorities and individuals with a history of mental illness.

This cutting-edge technology relies on algorithms that sift through data, seeking patterns and deviations that could indicate potential threats. While these algorithms are anathema to many in the West, they’re being heralded as triumphs in China.

Reports detail instances where the technology flagged suspicious behavior, leading to investigations and uncovering fraud and pyramid schemes. However, these technologies extend far beyond surveillance. They are a powerful weapon for a society seeking to maintain near-total social control over the populace.

China’s focus on maintaining social stability is unwavering, and any perceived threat to it is aggressively silenced. Under the regime of President Xi Jinping, the security state has become more centralized, deploying technology to quell unrest, enforce strict COVID-19 lockdowns, and curb dissent. Unfortunately, China appears to be the model for leaders like Justin Trudeau.

Policing the homeland

Described as a groundbreaking innovation by Time magazine in 2011, predictive policing has been quietly rolled out across the U.S. Numerous police departments in the country are experimenting with predictive software, envisioning a future where law enforcement could foresee and thwart crimes before they unfold. Developers tout this technology as a means to eliminate human bias, enhance the precision of policing, and optimize resource allocation.

This approach gained momentum with substantial federal grants directed toward smart policing solutions. The LAPD, led by Police Chief William Bratton, spearheaded one of the initial trials in 2009 with $3 million in federal funding. The goal was to predict crime-prone areas and deploy officers pre-emptively to deter criminal activities. The involvement of respected figures like Bratton lent credibility to the technology, leading to its adoption by other departments nationwide. By 2014, a survey revealed that 38% of 200 surveyed departments were using predictive policing, and 70% were planning to implement it in the coming years.

Using data to find high-crime areas and deploy more resources there is a reasonable application. However, with the rapid advancement of AI technology and the universal tracking of our devices, how long before the regime turns this pre-crime tech loose on the populace? We’re going to have to grapple with these questions, because AI is quickly approaching “Black Mirror” levels of surveillance horror.

Peter Gietl

Managing Editor, Return

Peter Gietl is the managing editor for Return. He is a tech journalist, magazine editor, and essayist covering human stories in the digital age, from crypto to AI to transhumanism. He lives in Colorado.
@petergietl