The Supreme Court on July 13 took strong note of the Ministry of Information and Broadcasting's decision to set up a social media hub for monitoring online data. It observed that such a move would be “like creating a surveillance state” and issued a notice to the central government on a plea by Mahua Moitra of the Trinamool Congress (TMC). Surveillance and data-driven methods can also lead to discrimination against people.
Social media giant Facebook stated in its biannual transparency report in May that the Indian government had requested user data from the company over 12,000 times in the second half of 2017. India ranks second in the world, after the US, among countries seeking data on Facebook users.
Government Surveillance Programs
Many dimensions of surveillance are exploited by both government and private agencies. CCTV cameras that routinely capture our images, government-issued IDs, our smartphones, credit cards, biometric recorders and the like are all put to various surveillance purposes.
A major problem with surveillance is that it reduces identity to data packages like “where we live”, “what we consume”, “how we spend”, “where we surf”, etc. These packages ignore the human core of our multifaceted existence and create categories of administrative, organisational and business convenience.
The need for such varied, sometimes insensitive, and intense surveillance is justified by these agencies citing the safety of citizens, speedy processing, control of anti-social elements and better coordination for governance. However, their assurances on the safety of the data, and the credentials of the agencies processing it, are often unconvincing. Even Mark Zuckerberg had to appear before a Congressional committee and apologise after a massive data leak affected the privacy of millions of Facebook users worldwide.
Along with worries over breaches of privacy, surveillance often brings mechanisms that categorise people in discriminatory ways. As a country with a history of multiple forms of discrimination, India is at greater risk. People in liminal spaces, those culturally, legally or politically marked as the “other”, are more likely to fear negative labelling, since it could limit their life chances.
Unconscious Discrimination
Four UN rapporteurs recently raised an alarm that the Bengali Muslim minority in Assam might face discrimination in the compilation of the National Register of Citizens, heightening anxiety over the data generated through surveillance mechanisms.
The coding and classifying mechanisms in many Big Data processes can consciously or unconsciously perpetuate socio-culturally prevalent prejudices and stereotypes. The grip of Islamophobia after September 2001 is worth reflecting on. Narratives like “My Name is Khan” typify the problem of marking members of a specific community as “suspect”.
Aadhaar Surveillance
Surveillance came into the public sphere with colonisation. India was one of the earliest places where fingerprinting was used to pin down identities. The British employed it for two purposes: to sign business contracts with the natives and to identify and notify criminals. Today, with Automatic Identification and Data Capture (AIDC) technologies like Aadhaar, the state can use data for various purposes and monitor us in newer ways. Those who enrol are more likely to be assigned a positive, law-abiding identity than those who do not.
Automation, standardisation and instrumentality involved in the process reduce the negotiable spaces of social interaction and convert them into binaries, viz. accept or reject, legitimate or illegitimate, etc. This diminishes the chances of the oppressed and excluded, who are more likely to be ignorant of data-driven norms, by depriving them of negotiable spaces for empowerment. If the creditworthiness of someone seeking a loan has to be determined by CIBIL, or Credit Information Bureau (India) Limited, people with less access to banking are more likely to lose out.
So, the use of data generated through surveillance becomes not merely a matter of safety and privacy, but also one of social justice. While the rich capitalise on data-driven modes of transaction by virtue of their technological competence and affordability, the poor become victims of surveillance. Thus, the digital divide widens the gulf between the rich and the poor.
The Data Debate
Today, across the globe, debates on the use of data are largely a trade-off between national security, development and civil liberties, with the first two taking precedence. The production of searchable databases increases the exclusion of people deemed undesirable, and who counts as undesirable varies with circumstances and government policies. For example, the government’s logic of identifying eligible beneficiaries for welfare programmes has changed with the increase in population and its diversity. Databases make it possible to quickly identify and establish new categories of convenience.
Data is neither devoid of the government’s policy goals nor a conspiracy in itself. Considering the challenges of governance and global shifts, data-driven approaches are here to stay. However, irrespective of the goals, the complex process of data assemblage raises privacy concerns. The state’s power to assign identities grows with surveillance and may lead to the loss of the individual’s agential power.
Data-based processing fosters superficial and surveillance-driven approaches. A healthy society must repose faith in the mutual trust between individuals and institutions. If a society leans heavily on such standardised Big Data processes, it is bound to compromise its humane and democratic ethos.
It is the duty of civil society to educate “ordinary citizens” and kindle their thinking, so that they understand and resist the uncritical acceptance of drowning in data.
– By Padmakumar M.M. and Om Prakash L.T.