
This article was published on March 23, 2020

Pardon the Intrusion #13: Policing using AI


Welcome to the latest edition of Pardon The Intrusion, TNW’s bi-weekly newsletter in which we explore the wild world of security.

A few months ago, I wrote about how the Indian government dismissed fears of mass surveillance in response to concerns that its proposed facial recognition system lacks adequate oversight.

But as the country’s capital was gripped by communal violence last month, it appears that law enforcement agencies employed the tech to identify more than 1,100 individuals who were allegedly involved in riots and violent protests.

“We are using face recognition software to identify people behind the violence,” India’s home minister Amit Shah said. “We have also fed Aadhaar (personal identity numbers based on an individual’s biometric and demographic data) and driving license data into this software, which has identified 1,100 people. Out of these, 300 people came from [the north Indian state of] Uttar Pradesh to carry out violence.”

This is not the first time the tech has been adopted in India, though. It’s been employed by police forces during parades, and once at a political rally earlier this year to screen crowds. The Delhi police force uses facial recognition software called AI Vision to identify suspects in real-time.

What’s more, police in Uttar Pradesh used the technology — called Police Artificial Intelligence System (PAIS) developed by Indian startup Staqu — during protests against a controversial citizenship law that critics say marginalizes Muslims.

Although this admission is huge, here’s the problem: From a legal point of view, India currently lacks comprehensive regulations that spell out responsible uses of such technology. Even worse is the lack of consent that stems from sharing Aadhaar data with law enforcement.

As the government works towards creating a nationwide database to match images captured from CCTV cameras with existing records, proper oversight is essential to protect individual privacy and prevent innocent people from being arrested.


Do you have a burning cybersecurity question, or a privacy problem you need help with? Drop me an email, and I’ll discuss it in the next newsletter! Now, onto more security news.

What’s trending in security?

It was only a matter of time before hackers learned to exploit the coronavirus pandemic to distribute malware. In the past two weeks, more bad apps were booted from Apple and Google’s app stores, and T-Mobile, Virgin Media, Uber, Walgreens, and anonymous social media app Whisper suffered data leaks.

  • Be safe online and offline. As coronavirus becomes a pandemic, baddies are taking advantage of the situation by spreading malware disguised as a “Coronavirus map” that activates an information stealer called “AZORult.” [TNW via Reason Cybersecurity]
  • Your VPN and ad-blocker apps could be leaking the internet traffic passing through your phone, courtesy of app analytics firm Sensor Tower. The company, however, said it “only collects anonymized usage and analytics data.” [Buzzfeed News]
  • The crypto wars are back again: US lawmakers are pushing forward with the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (aka EARN IT) that aims to enforce standards to protect children from sexual exploitation online, but at the cost of data privacy. Match Group, which owns dating apps like Match, Tinder, OkCupid, and Hinge, said it will support the act. [CNET / Match Group]
  • More cases of bad apps: Banjo, an AI-powered surveillance insights firm, used a shadow company to push benign-looking Android and iOS apps that secretly scraped users’ social media accounts. In a similar case, Clean Master, an Android security app with 1 billion downloads, was pulled from the Google Play Store after it was found recording users’ web-browsing activity. Avast was caught doing much the same not long ago. What’s more, attackers are making use of hidden apps to get malware onto mobile devices. [Motherboard / Forbes / TechRepublic]
  • Do you own a Samsung phone and have a Samsung account? The company is turning on mandatory 2FA for all new logins after disclosing a “small” data breach that affected a handful of customers. The 2FA, however, is SMS-based. At this point, there is no excuse for not enforcing 2FA on every account you value. [Forbes]
  • LGBTQ dating app Grindr has been sold by its Chinese owner Kunlun to investor firm San Vicente Acquisition for $608.5 million after a US government committee deemed Kunlun’s ownership of Grindr a national security risk. [The Financial Times]
  • Bad passwords are a thing within the CIA too. And the password for its top-secret hacking tools? “123ABCdef” [The Register]

  • Google location data turned an innocent biker into a burglary suspect just because he had passed the victim’s house three times within an hour. [NBC News]
  • Researchers detail how Android apps can steal one-time 2FA codes from Google Authenticator by taking screenshots — a flaw first disclosed in 2014. ThreatFabric found “Cerberus” to be the first known Android malware to exploit this technique to steal 2FA codes from the authenticator app. [ThreatFabric / Nightwatch Cybersecurity]
  • Consumer watchdog Which? has calculated that two in five Android devices are no longer receiving vital security updates from Google, putting them at greater risk of malware or other security flaws. [Which?]
  • Freshly published research uncovered multiple flaws in Intel and AMD CPUs that could expose sensitive data, inject arbitrary code (via an attack called Load Value Injection), and compromise security features. While AMD downplayed the threat, Intel released a patch to address the LVI vulnerability. [Positive Technologies / The Hacker News / Intel]
  • As malware authors race to develop more stealthy tools, Patrick Wardle, a former hacker for the National Security Agency, demonstrated how easy it is to steal and then re-purpose a rival’s code. [Ars Technica]

  • A penetration tester wanted to test the defenses of a South Dakota correctional facility in 2014 and his mom volunteered for the job. She not only managed to fake her way in, but also plugged malicious USB sticks into prison computers, giving him remote access to the systems. [WIRED]
  • Here’s a new, open-source tool that lets you open email attachments without fear of malware. [Dangerzone]
  • Researchers found problems in how Toyota, Hyundai, and Kia handle encryption in car immobilizers, allowing an attacker to remotely start the engine and then drive away. [WIRED]
  • An old story, but still relevant given the spate of ransomware attacks. “Like a man going through customs with cocaine trickling out of his pants leg”, Bloomberg’s Drake Bennett managed to sabotage his editor with ransomware he found on the dark web. [Bloomberg]
  • Microsoft along with partners across 35 countries took down Necurs, one of the most prolific spam and malware botnets known to date that’s believed to have infected more than nine million computers worldwide. [Microsoft]
  • The past two weeks in data breaches and leaks: Clearview AI (yes, that controversial facial recognition startup), T-Mobile, Uber, Virgin Media, Visser, Walgreens, and Whisper.

Data Point

Did you know hacking victims are uncovering cyberattacks faster? We probably have GDPR to thank for that. According to FireEye Mandiant’s M-Trends 2020 report, organizations have gotten better at finding and containing attackers.

The global median dwell time, the number of days an attacker is present in a victim’s network before being detected, has gone down from 416 days in 2011 to 56 days in 2019. In the European Union, the median dwell time fell from 177 days in 2018 to just 54 days, a drop of nearly 70%. Also of note: more victims are being notified of breaches by an external party than are identifying the security incident on their own.
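The year-over-year improvements quoted from the M-Trends report can be sanity-checked with a quick percentage-decrease calculation (the figures below are the ones cited above, nothing more):

```python
def pct_decrease(old: float, new: float) -> float:
    """Percentage decrease going from `old` to `new`."""
    return (old - new) / old * 100

# Global median dwell time: 416 days (2011) -> 56 days (2019)
print(round(pct_decrease(416, 56)))  # -> 87, i.e. an ~87% drop over the decade

# EU median dwell time: 177 days (2018) -> 54 days (2019)
print(round(pct_decrease(177, 54)))  # -> 69, i.e. a drop of nearly 70% in one year
```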

The GDPR mandates that affected organizations report a breach to the relevant data protection authority within 72 hours of becoming aware of it.

Takeaway: Data breaches are, unfortunately, a fact of life in the 21st century. That only means companies need to take security seriously and invest more in strengthening their cybersecurity defenses.

“Security effectiveness validation using purple team and red team exercises is one of the best ways for organizations to evaluate and test their security,” FireEye said. “By going up against real-world attackers, security teams can assess their own ability to detect and respond to an active attacker scenario. Response readiness assessments and incident response tabletop exercises also help improve preparedness.”

That’s it. See you all in a couple of days. Stay safe!

Ravie x TNW (ravie[at]thenextweb[dot]com)
