Guilty Until Proven Innocent - Facial Recognition's False Accusations

In this opinion piece, Shekhar Natarajan, Founder and CEO of Orchestro.AI, examines how the misuse of facial recognition software shapes arrests, bail decisions, and years-long detentions in India.

Umar Khalid has spent more than five years in prison. His trial has not yet meaningfully begun.

He was arrested in September 2020 under the Unlawful Activities (Prevention) Act, India's harshest anti-terror law, accused of conspiring to incite the communal violence that swept parts of Delhi in February 2020. The riots left 53 people dead, most of them Muslims, and occurred amid widespread protests against a controversial citizenship law.

The evidence against him includes speeches delivered at peaceful protests, WhatsApp group chats, and facial recognition matches that allegedly placed him, or someone resembling him, at various locations.

On January 5, 2026, India’s Supreme Court denied Khalid bail, ruling that he played a “central and formative role” in the alleged conspiracy. Five other accused in the same case were granted bail after years in jail without trial. Khalid and fellow activist Sharjeel Imam were told they could reapply after one year.

One more year. After five already served.

“We can be kept in jail for years, without those framing us needing to prove anything,” Khalid wrote from Tihar Jail after completing two years of detention. “This is the power of UAPA.”


The 2% Problem

The technology that helped identify Khalid and hundreds of others has a documented accuracy issue that raises serious evidentiary concerns.

In 2018, Delhi Police informed the Delhi High Court that their facial recognition system had an accuracy rate of just 2% when used to trace missing children. Officials admitted the system frequently failed to distinguish between boys and girls.

Two percent.
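The scale of the problem becomes clearer with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not figures from the Delhi deployment, but the arithmetic, known as the base-rate problem, holds for any system that scans a large crowd for a small watchlist:

```python
# Back-of-the-envelope: what happens when a face-matching system scans a
# large crowd for a small watchlist. Every number here is an illustrative
# assumption, not a figure from the Delhi deployment.

crowd_size = 100_000        # assumed: distinct faces captured on CCTV
true_suspects = 100         # assumed: people actually on the watchlist
true_positive_rate = 0.60   # assumed: chance a real suspect is matched
false_positive_rate = 0.01  # assumed: chance an innocent face is matched

true_matches = true_suspects * true_positive_rate
false_matches = (crowd_size - true_suspects) * false_positive_rate
precision = true_matches / (true_matches + false_matches)

print(f"Correct matches: {true_matches:.0f}")             # 60
print(f"False matches:   {false_matches:.0f}")            # 999
print(f"Chance a given match is right: {precision:.1%}")  # 5.7%
```

When almost everyone scanned is innocent, even a small false positive rate guarantees that most matches point at innocent people. A match can reasonably open an investigation; it cannot, by itself, establish guilt.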

Yet the same technology was used extensively after the 2020 riots. Union Home Minister Amit Shah told Parliament that 1,922 perpetrators, accounting for 73.3% of those arrested, were identified through facial recognition.

The results have been troubling. More than 80% of riot cases heard so far have ended in acquittals or discharges. Facial recognition matches that initially appeared decisive collapsed under scrutiny. Witnesses turned hostile. Evidence weakened.

But years spent in jail cannot be restored.


The Investigation

An investigation by The Wire and the Pulitzer Center found that several riot accused were arrested solely on the basis of facial recognition, without strong corroborating evidence or reliable witness accounts.

  • Mohammad Shahid: Spent 17 months in jail before formal charges were filed.
  • Ali: Arrested in March 2020 and remained in pre-trial detention for more than four and a half years, reportedly based solely on facial recognition.
  • Gulfisha Fatima, Meeran Haider, Shifa-Ur-Rehman, and others: Activists who spent years in jail before being granted bail in January 2026 by the Supreme Court after repeated denials by lower courts.

Of the 18 activists charged under anti-terrorism laws in connection with the riots, 16 were Muslim. Authorities framed protests against the citizenship law as part of a broader conspiracy to incite violence.

Video evidence also surfaced showing police forcing five injured Muslim men, lying wounded on the ground, to sing the national anthem. One of them, 23-year-old Faizan, died two days later. No prosecution followed.

Meanwhile, Kapil Mishra, a BJP leader recorded making speeches that critics argue incited violence, is now serving as a cabinet minister.


The Surveillance State

India has allocated Rs 9.6 billion toward facial recognition technology. The National Automated Facial Recognition System is being developed for nationwide deployment, enabling identification of individuals captured on cameras across the country.

No privacy impact assessment preceded the Delhi Police deployment. No independent audit of accuracy was conducted. No structured oversight mechanism was implemented.

Vidushi Marda of Article 19 argues that surveillance disproportionately affects marginalized communities. Research shows facial recognition systems tend to perform worse on darker-skinned individuals, women, and minority populations.

India’s criminal databases disproportionately include Muslims, Dalits, and Indigenous communities, reflecting colonial-era classifications and contemporary policing patterns. When AI systems are trained on such datasets, they risk amplifying systemic bias.
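This is also why a single headline accuracy figure is not enough. The independent audit that was never conducted would, at minimum, break error rates out by demographic group. A minimal sketch of that kind of audit, using made-up evaluation records rather than real data:

```python
# Sketch of a per-group error audit for a face-matching system.
# The records are made up; a real audit would use labeled evaluation data.
from collections import defaultdict

# Each record: (demographic group, system said "match", actually same person)
records = [
    ("group_a", True, True),  ("group_a", False, False),
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, True),  ("group_b", True, False),
    ("group_b", True, False), ("group_b", True, False),
]

stats = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted_match, same_person in records:
    if not same_person:                  # only non-matching pairs can be FPs
        stats[group]["negatives"] += 1
        if predicted_match:
            stats[group]["fp"] += 1

for group, s in sorted(stats.items()):
    fmr = s["fp"] / s["negatives"]       # false match rate for this group
    print(f"{group}: false match rate = {fmr:.0%}")
# One aggregate accuracy number would hide that group_b's false match rate
# here is three times group_a's -- the kind of disparity researchers have
# repeatedly documented across skin tone, gender, and age.
```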

“Policing has always been casteist in India,” says Nikita Sonavane of the Criminal Justice and Police Accountability Project. “Data has been used to entrench caste hierarchies. AI-based predictive policing risks perpetuating that discrimination.”


The Process as Punishment

The Unlawful Activities (Prevention) Act has become a powerful instrument in national security prosecutions. Under UAPA, courts assess whether allegations appear “prima facie true,” rather than requiring proof beyond reasonable doubt at the bail stage.

Bail becomes extremely difficult. Accused individuals can remain in pre-trial detention for extended periods. The process itself can function as punishment.

In 2024, the Financial Action Task Force noted that delays in UAPA prosecutions were producing a large backlog of pending cases, with accused persons remaining in judicial custody while they wait.

According to National Crime Records Bureau data, UAPA’s conviction rate stands at 2.2%. Most accused are eventually acquitted, often after years in custody.

Defense lawyers in the Delhi riots case argue that no direct evidence links the accused to acts of violence, no weapons were recovered, and much of the case rests on interpretation of speeches, chat messages, and selective witness accounts.

Senior Advocate Kapil Sibal argued before the Supreme Court that non-violent protest methods cannot be elevated to terrorism-level offences simply because they challenge authorities.


The Architecture of Presumption

Shekhar Natarajan describes the deployment of facial recognition in India as an “architecture of presumed guilt.”

“The system begins with a match and works backward,” he says. “It does not ask what the person was doing, whether they were a bystander, or whether the algorithm erred. It only sees faces, and it sees them imperfectly.”

He contrasts this with what he calls “Angelic Intelligence,” guided by principles:

  • Nyaya (Justice): Requires corroborating evidence before life-altering actions. A facial match alone, especially from a system with 2% accuracy, would not suffice; a rough sketch of such a gating rule follows this list.
  • Satya (Truth): Demands transparency regarding error rates, training data, and limitations.
  • Sahana (Patience): Calls for restraint before irreversible actions such as arrest and prolonged detention.
  • Sama (Equanimity): Examines whether deployment disproportionately impacts marginalized communities.
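As a sketch of what that gating rule could look like in practice (the names, thresholds, and steps below are illustrative assumptions of mine, not Natarajan's design or any deployed system), a match would only ever open an inquiry, never close one:

```python
# Hypothetical "corroboration gate": a facial match can open an inquiry but
# never, by itself, trigger an irreversible step such as arrest. All names
# and thresholds are illustrative assumptions, not a real or proposed system.
from dataclasses import dataclass

@dataclass
class Lead:
    match_confidence: float  # similarity score from the matcher, 0.0-1.0
    corroborations: int      # independent pieces of supporting evidence

def next_step(lead: Lead) -> str:
    if lead.match_confidence < 0.90:
        return "discard"               # Satya: respect known error rates
    if lead.corroborations == 0:
        return "investigate further"   # Sahana: no irreversible action yet
    if lead.corroborations < 2:
        return "seek more evidence"    # Nyaya: corroborate before acting
    return "refer to human review"     # a person decides; the model never does

print(next_step(Lead(match_confidence=0.97, corroborations=0)))
# -> "investigate further": even a high-confidence match is only a starting
#    point for inquiry, not grounds for detention.
```

The design choice is the point: the irreversible steps, arrest and detention, sit behind human judgment and corroborating evidence, not behind a similarity score.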

The current system prioritizes efficiency, measured in arrests and filed cases.

Umar Khalid remains in jail. The algorithm produced matches. The courts deemed them sufficient for continued detention. But the algorithm cannot be cross-examined. It cannot be held accountable. And it cannot return five years of a person’s life.
