The digital world can be a dangerous place, particularly in India with its increasingly tech-based economy. The actions and inactions of companies which facilitate our use of the internet — from internet service providers to search engines and social media platforms — can trigger a wide range of harms, including identity theft, privacy violations and misinformation.

Certain harms merit more attention from regulators. These are harms that directly impair, subvert or compromise Indian constitutional values such as freedom of speech and expression, equality and democracy.

A company’s capacity to trigger such harms depends directly on what it does and how big it is. For example, companies supplying internet access through mobile or broadband networks have a limited capacity to spread misinformation and disinformation, since they do not directly publish content, unlike Google, Facebook and X, which do.

Threats to free speech and equity

There are various types of harms. Cybercrimes such as identity theft, child pornography and copyright violations mostly fall under the category of private harms. These are addressed through criminal sanctions, and policing usually focuses on the individuals committing such crimes.

But regulators have also started to focus on platforms’ obligations to act against such crimes; where platforms have failed to do so, criminal actions against them have been initiated.

The arrest of Pavel Durov, CEO of the encrypted messaging app Telegram, is a clear example of this.

Another kind of harm is anti-competitive practice, where platforms that are also producers of goods and services may stop competing firms from entering the market. Indian authorities are trying to stop such behaviour by fining large platforms such as Google.

Yet another kind of harm relates to privacy violations, leading to individual and collective discrimination. Most social media platforms are free to use but get access to users’ data in exchange. 

This data is monetised by such platforms, which sell it to all kinds of companies. It can then be used by public and private actors to make decisions that may result in price discrimination, denial of access to goods, services and credit, and loss of employment opportunities. 

Recently, in the UK, revoking a job offer based on a potential candidate’s social media posts was found discriminatory by the country’s employment tribunal. According to a study, personalisation based on user preferences by e-commerce platforms has resulted in widespread price discrimination for users.

Hence, such discrimination qualifies as a public harm. Finally, there are harms arising from deliberate online falsehoods, which misinform, misguide and incite negative social action, including violence and censorship.

For instance, voters can be influenced through false information.

In 2019, the Internet and Mobile Association of India announced a voluntary code of ethics that platforms adopted to regulate online content. The code was developed in response to challenges highlighted by the Election Commission of India including maintaining transparency in political advertisements.

It was adhered to during the general elections held earlier this year. 

Making online platforms accountable

Many of these problems stem from digital platforms having long operated on the legal principle of ‘safe harbour’. This means they are not liable for actions triggered by their users.

For instance, if Google’s Chrome browser were used to illegally access copyrighted material, or X were used to issue hoax bomb threats, both Google and X would be exempt from liability. This exemption applies when the platforms have not initiated the transmission, selected the receiver or modified the content.

However, this exemption requires that platforms undertake due diligence as prescribed by law, including responding to courts or enforcement agencies swiftly.

Here, the logic is that the architecture of the platforms does not allow for content to be checked before it is published, unlike what occurs with a traditional publisher.

However, given that platforms such as X and Facebook profit from the interactions of all their users, including those whose actions cause harm, they should share liability too.

Some harms can be linked directly to the platforms and the liability should lie solely with them. For instance, if Google prioritises its own apps over those developed by others, it creates entry barriers for competitors and stifles competition.

Lawmakers are realising this and pushing for legal accountability. In India, the rules regulating information technology have expanded due diligence obligations for social media platforms. Platforms such as Facebook with large user bases have additional legal obligations including appointing a chief compliance officer and publishing compliance reports.

Constitutional harms

It is difficult to regulate content prior to publication; however, platforms can still regulate content afterwards. Mostly, they choose not to. This is because falsehoods spread faster, capture user attention and hence drive engagement.

Platforms may also abuse their regulatory capacity to curb online speech and expression. This is usually done at the behest of the government.

Recently, the Bombay High Court prevented such an attempt by striking down an amendment to the law that would have resulted in the establishment of a government fact check unit to identify “false, fake and misleading” information about the “business of the government”.

Had the amendment stood, the government would have had powers to compel platforms to remove information that it found inconvenient.

The amendment was declared unconstitutional on the grounds that it would lead to censorship and have a chilling effect on free speech. This is why such harms need to be categorised as constitutional harms — as they directly impair, subvert and compromise constitutional values such as the right to freedom of speech.

It’s not the first time such an attempt has been made. Recently, Indians have witnessed an unprecedented expansion of the state’s regulatory powers purportedly to address harms such as cybercrimes and misinformation.
But the grounds for exercising such powers are too broad and may end up threatening free speech, consequently fuelling further constitutional harms.

Protecting user data and users

The conduct of platforms is critical to ensuring constitutional values are protected in cyberspace; their services therefore ought to be regulated more robustly.

Ways of doing this could include requiring them to publicly disclose information such as internal company policies on prioritising search traffic based on advertisements, content moderation and blocking, and government requests for content moderation. 

There could also be an absolute prohibition on the collection of sensitive personal information by platforms, given it may lead to community-wide discrimination.

Governments could also refrain from passing laws that might lead to constitutional harms. To ensure this, the judiciary has to develop a standard of review to assess such laws. Currently, the standard of review is the proportionality test. 

This means that if the state takes an action that restricts a fundamental right, the restriction must be balanced against the goal the state seeks to achieve.

For instance, the benefits of a biometrics-enabled identity card for access to government subsidies should be weighed against the threat to privacy. This mechanism is inadequate, since it not only presumes the primacy of the state’s public policy objective, but also provides it a wide discretion in the choice of tools.

Nupur Chowdhury is an Assistant Professor of Law in the Centre for the Study of Law and Governance at Jawaharlal Nehru University in New Delhi. Her research interests include legal regulation of environmental and health risks, frugal innovations and constitutional values.

(Photo caption: Serious online harms such as influencing voters through false information need to be relabelled as constitutional harms to help regulate them better. Credit: Al Jazeera English, Attribution-ShareAlike, Flickr)

