Digital self-determination

From Wikipedia, the free encyclopedia

Digital self-determination is a multidisciplinary concept derived from the legal concept of self-determination and applied to the digital sphere, to address the unique challenges to individual and collective agency and autonomy arising with the increasing digitalization of many aspects of society and daily life.


There is no philosophically or legally agreed-upon concept of digital self-determination yet. Broadly speaking, the term describes the attempt to comprehensively project the pattern of human self-determination (as first explored in disciplines like philosophy and psychology, and in the law) into the digital age.

The concept was first included in an official document by ARCEP, the French telecoms regulator, in a section of its 2021 Report on the State of the Internet,[1] which explores the work on "Network Self-determination"[2] conducted by Professor Luca Belli.



The concept of self-determination relates to the concepts of subjectivity, dignity, and autonomy in classical central-European philosophy and derives from Immanuel Kant's conception of freedom. Self-determination presupposes that human beings are entities capable of reason and of responsibility for their own rationally chosen and justified actions (autonomy), and ought to be treated accordingly. In formulating his categorical imperative (kategorischer Imperativ), Kant suggested that humans, as a condition of their autonomy, must never be treated merely as a means to an end but always as ends in themselves. The pattern of self-determination similarly aims at enabling autonomous human beings to create, choose, and pursue their own identity, actions, and life choices without undue interference.


In psychology, the concept of self-determination is closely related to self-regulation and intrinsic motivation, i.e., engaging in a behavior or activity because it is inherently rewarding to do so, as opposed to being driven by external motivations or pressures, like monetary incentives, status, or fear. In this context, self-determination and intrinsic motivation are linked to feeling in control of one's choices and behavior and are considered necessary for psychological well-being. Self-determination theory (SDT), first introduced by psychologists Richard Ryan and Edward Deci in the 1980s,[3][4] and further developed through the 1990s and 2000s, has been highly influential in shaping the concept of self-determination in the field of psychology. Ryan and Deci's SDT proposed that individuals' motivated behavior is characterized by three basic and universal needs: autonomy, competence, and relatedness.[5] Autonomy refers here to the need to feel free to decide one's course of action. Competence refers to the need to have the capacity and skills to undertake and complete motivated behavior in an effective manner. Finally, relatedness refers to the need to experience warm and caring social relationships and feel connected to others. According to SDT, all three needs must be fulfilled for optimal functioning and psychological well-being. However, other psychologists like Barry Schwartz have argued that if self-determination is taken to extremes, freedom of choice can turn into the "tyranny of choice".[6] In this view, having too much autonomy and too many choices over our course of action can be perceived as overwhelming, make our decisions more difficult, and ultimately lead to psychological distress rather than well-being.


Human rights

In international law, the right of a people to self-determination is commonly recognized as a ius cogens rule. Here, self-determination denotes that a people, based on respect for the principle of equal rights and fair equality of opportunity, have the right to freely choose their sovereignty, international political status, and economic, social, and cultural development without interference. In the framework of the United Nations, fundamental rights like self-determination are mainly defined in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights.

Informational self-determination in German law

The concept of informational self-determination (informationelle Selbstbestimmung), considered as a modern fundamental right which protects against unjustified data processing, has featured prominently in the German Federal Constitutional Court's (Bundesverfassungsgericht) jurisprudence and might be the most direct precursor and inspiration to the concept of digital self-determination.

In 1983, the Bundesverfassungsgericht ruled that "in the context of modern data processing, the general right of personality under Article 2.1 in conjunction with Article 1.1 of the Basic Law encompasses the protection of the individual against unlimited collection, storage, use and sharing of personal data. The fundamental right guarantees the authority conferred on the individual to, in principle, decide themselves on the disclosure and use of their personal data." (Volkszählungsurteil, headnote 1).

Philosophically, the right to informational self-determination is deeply rooted in the Bundesverfassungsgericht's understanding of inviolable Human Dignity (Article 1 of the Grundgesetz) as a prohibition of human objectification (in German: Objektformel; see for example n°33 of BVerfGE 27, 1 - Mikrozensus). This understanding refers back to the late 18th-century German philosophy of Enlightenment. The Volkszählungsurteil was inspired by the concern that modern data processing technology could lead to a "registration and cataloging of one's personality in a manner that is incompatible with human dignity" (Volkszählungsurteil, headnote 4). In this view, human beings, due to their inviolable dignity, may never be treated like depersonalized and objectified resources that can be harvested for data. Instead humans, due to their capacity for autonomy, are self-determined agents possessing a significant degree of control over their informational images.

Self-determination in the digital sphere

The increasing digitization of most aspects of society poses new challenges for the concept and realization of self-determination.[7] While the digital sphere has ushered in innovation and opened up new opportunities for self-expression and communication for individuals across the globe, its reach and benefits have not been evenly distributed, oftentimes deepening existing inequalities and power structures, commonly referred to as a digital divide. Moreover, the digital transformation has enabled, oftentimes unbeknownst to individuals, the mass collection, analysis, and harvesting of personal data by private companies and governments to infer individuals' information and preferences (e.g., by tracking browsing and shopping history), influence opinions and behavior (e.g., through filter bubbles and targeted advertisements), and/or to make decisions about them (e.g., approving or denying a loan or employment application), thus posing new threats to individuals' privacy and autonomy.[8]

Although the definition of digital self-determination is still evolving, the term has been used to address humans' capacity (or lack thereof) to exercise self-determination in their existence in and usage of digital media, spaces, networks, and technologies, with the protection of the potential for human flourishing in the digital world as one of the chief concerns.[9]

Starting in the 2010s, a few multidisciplinary and cross-sectoral initiatives around the world have been working on developing a theoretical framework for the concept of digital self-determination.

In 2015, the Cologne Center for Ethics, Rights, Economics, and Social Sciences of Health (CERES) at the University of Cologne conducted a study to help define digital self-determination and develop metrics to measure its fulfillment.[10] Their study report defines digital self-determination as "the concrete development of a human personality or the possibility of realizing one's own plans of action and decisions to act, insofar as this relates to the conscious use of digital media or is (co-)dependent on the existence or functioning of digital media".

In 2017, Professor Luca Belli presented at the United Nations Internet Governance Forum the concept of network self-determination as the "right to freely associate in order to define, in a democratic fashion, the design, development and management of network infrastructure as a common good, so that all individuals can freely seek, impart and receive information and innovation."[11] Arguing that the right to network self-determination finds its basis in the fundamental right to self-determination of peoples as well as in the right to informational self-determination, Belli posits that network self-determination plays a pivotal role in allowing individuals to associate and join efforts to bridge digital divides in a bottom-up fashion, freely developing common infrastructure. The concept gained traction in Latin America, becoming a core element of research and policy proposals dedicated to community networks.[12]

In 2018, the Swiss government launched a Digital Self-Determination network in response to the action plan for the Federal Council's 'Digital Switzerland' strategy, including representatives from the Swiss Federal Administration, academia, civil society, and the private sector.[13] This network conceptualizes digital self-determination as "a way of enhancing trust into digital transformation while allowing all actors of society to benefit from the potential of the data economy". It proposes that the core principles of digital self-determination are transparency and trust, control and self-determined data sharing, user-oriented data spaces, and decentralized data spaces that operate in proximity to citizens' needs. The network aims "to create an international network that represents the basic principles of digital self-determination and on this basis will elaborate best practices, standards, and agreements to develop international data spaces".

In 2021, the French Telecoms Regulator (ARCEP) referred to the concept of Digital Self-determination in its official annual report dedicated to "The State of the Internet",[1] drawing on the IGF output document report on "The Value of Internet Openness in Times of Crisis".

In 2021, the Centre of AI & Data Governance at Singapore Management University launched a major research project focusing on the concept of digital self-determination, in collaboration with the Swiss government and other research partners.[14] Their theoretical framework[7] focuses on data governance and privacy, and proposes that the core components of digital self-determination are the empowerment of data subjects to oversee their sense of self in the digital sphere, their ability to govern their data, consent as a cornerstone of privacy and data protection, protection against data malfeasance, and accuracy and authenticity of the data collected. This proposed framework also emphasizes that digital self-determination refers to both individuals and collectives, and that the concept should be understood in the context of "rights dependent on duties" and in parallel to concepts of a social or relational self, social responsibility, and digital solidarity (see below: Addressing the multilevel 'self' in digital self-determination).

In 2021, the Digital Asia Hub in collaboration with the Berkman Klein Center at Harvard University and the Global Network of Internet & Society Centers, conducted a research sprint to explore the concept of digital self-determination from different perspectives and across cultural contexts. This initiative approached digital self-determination "as an enabler of - or at least contributor - to the exercise of autonomy and agency in the face of shrinking choices", to address questions of control, power, and equity "in a world that is increasingly constructed, mediated, and at times even dominated by digital technologies and digital media, including the underlying infrastructures."[15]

In addition to the work of governments and research centers, members of civil society have also advocated for digital self-determination. For example, Ferdinand von Schirach, an attorney and widely read German author of fictional legal short stories and novels, has launched an initiative entitled "JEDER MENSCH", which translates to "Every human". In "JEDER MENSCH", von Schirach calls for the addition of six new fundamental rights to the Charter of Fundamental Rights of the European Union. Article 2 of this proposal is entitled "right to digital self-determination" and reads: "Everyone has the right to digital self-determination. Excessive profiling or the manipulation of people is forbidden."[16]

In October 2021, an International Network on Digital Self-Determination was created[17] with the intention of "bringing together diverse perspectives from different fields around the world to study and design ways to engage in trustworthy data spaces and ensure human centric approaches".[18] The network is composed of experts from the Directorate of International Law of the Swiss Federal Department of Foreign Affairs;[19] the Centre for Artificial Intelligence and Data Governance at Singapore Management University;[20] the Berkman Klein Center at Harvard University;[21] the Global Tech Policy Practice at the TUM School of Social Sciences and Technology;[22] and The GovLab at New York University.[23]

Practical elements

Different sectors of society, ranging from legislators and policy-makers, to public organizations and scholars, to activists and members of civil society, have called for digital infrastructure, tools, and systems that protect and promote individuals' self-determination, including equal and free access, human-centered design, better privacy protections, and control over data. These elements are closely connected and complement one another. For example, equal access to digital infrastructure can enable the representation of diverse viewpoints and participatory governance in the digital sphere, and decentralized systems might be necessary to ensure individuals' control over their data.

Access to digital infrastructure and tools

Bridging the various forms of existing digital divides and providing equitable and fair access to digital technologies and the internet has been proposed as crucial to ensure that all individuals are able to benefit from the digital age, including access to information, services, and advancement opportunities.[24][25]

In this sense, the concept of digital self-determination overlaps with the concept of "network self-determination",[26] as it emphasizes that groups of unconnected and scarcely connected individuals can regain control over digital infrastructures by building them and shaping the governance framework that will organize them as a common good.[27] As such, Belli stresses that network self-determination leads to several positive externalities for the affected communities, preserving the Internet as an open, distributed, interoperable, and generative network of networks.[2]

Digital literacy

Digital literacy and media literacy have been proposed as necessary for individuals to acquire the knowledge and skills to use digital tools, critically assess the content they encounter online, create their own content, and understand the features and implications of the digital technology used on them as well as of the technology they consciously and willingly engage with.[28] In addition to basic digital navigation skills and critical consumption of information, definitions of digital literacy have been extended to include an awareness of existing alternatives to the digital platforms and services used, an understanding of how personal data is handled, awareness of rights and existing legal protections, and knowledge of measures to independently protect one's security and privacy online (e.g., the adoption of obfuscation techniques as a way of evading and protesting digital surveillance[29]).
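As an illustrative sketch of what such an obfuscation technique can look like in practice (in the spirit of browser tools like TrackMeNot, though the code, names, and decoy topics below are hypothetical), a real search query can be hidden within a batch of plausible decoys, degrading the behavioral profile a tracker can build:

```python
# Hypothetical sketch of query obfuscation: mix a real query into a
# shuffled batch of decoy queries so an observer of the query stream
# cannot tell which one reflects the user's actual interest.
import random

DECOY_TOPICS = [
    "weather forecast", "pasta recipes", "local news",
    "bicycle repair", "houseplant care", "chess openings",
]

def obfuscated_query_stream(real_query, n_decoys=5, rng=None):
    """Return the real query shuffled among n_decoys decoy queries."""
    rng = rng or random.Random()
    queries = rng.sample(DECOY_TOPICS, k=min(n_decoys, len(DECOY_TOPICS)))
    queries.append(real_query)
    rng.shuffle(queries)
    return queries
```

The design choice is deliberately simple: the observer still sees the real query, but its signal is diluted among noise, which is the core idea of obfuscation as protest against surveillance rather than outright evasion.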

Representation of diverse realities and viewpoints

Internet activist Eli Pariser coined the term filter bubble to refer to the reduced availability of divergent opinions and realities that we encounter online as a consequence of personalization algorithms like personalized search and recommendation systems.[30] Filter bubbles have been suggested to facilitate a warped understanding of others' points of view and the world. Ensuring a wide representation of diverse realities on digital platforms could be a way of increasing exposure to conflicting viewpoints and avoiding intellectual isolation into informational bubbles.

Human-centered design of user interfaces and experiences

Scholars have coined the term attention economy to refer to the treatment of human attention as a scarce commodity in the context of ever-increasing amounts of information and products. In this view, the increasing competition for users' limited attention, especially when relying on advertising revenue models, creates a pressing goal for digital platforms to get as many people as possible to spend as much time and attention as possible using their product or service. In their quest for users' scarce attention, these platforms would be incentivized to exploit users' cognitive and emotional weaknesses, for example via constant notifications, dark patterns, forced multitasking, social comparison, and incendiary content.[8] Advocates of human-centered design in technology (or humane technology) propose that technology should refrain from such 'brain-hacking' practices, and instead should support users' agency over their time and attention as well as their overall wellbeing.[31]

Data governance

Scholar Shoshana Zuboff popularized the term surveillance capitalism to refer to the private sector's commodification of users' personal data for profit (e.g., via targeted advertising), leading to increased vulnerability to surveillance and exploitation. Surveillance capitalism relies on centralized data management models wherein private companies retain ownership and control over the users' data. To guard against the challenges to individuals' privacy and self-determination, various alternative data governance models have recently been proposed around the world, including trusts,[32] commons,[33] cooperatives,[34] collaboratives,[35] fiduciaries,[36] and "pods".[37] These models have some overlap and share a common mission to give individuals more control over their data and thus address the current power imbalances between data holders and data subjects.[38]

Current issues

Addressing the multilevel 'self' in digital self-determination

Digital self-determination for individuals

An individual's exercise of self-agency can be intimately connected to the digital environments one is embedded in, which can shape one's choice architecture, access to information and opportunities, and exposure to harm and exploitation, thereby affecting the person's capacity to freely and autonomously conduct his or her life. A variety of digital technologies and their underlying infrastructure, whether their human interfaces are visible or indirect, can contribute to conditions that empower or disempower an individual's self-determination in the spheres of socio-economic participation, representation of cultural identity, and political expression.

An illustration depicting the use of facial recognition technology on a woman

The extent of technologically-mediated spheres where such influence could take place over an individual's self-determined choices has been the focus of growing contemporary debates across diverse geographies. One of the debates concerns whether an individual's privacy, as a form of control over one's information,[39] may or may not be sufficiently protected from exploitative data harvesting and micro-targeting that can exert undue behavioural influence over the individual as part of a targeted group. Developments in this area vary greatly across countries and regions where there are different privacy frameworks and big data policies, such as the European Union's General Data Protection Regulation and China's Social Credit System,[40] which approach personal data distinctly.[41]

Other debates concern whether individual agency in decision-making can be undermined by predictive algorithms;[42] whether individual workers, particularly in the Global South,[43] may encounter new employment opportunities as well as unique vulnerabilities in the digital economy; whether an individual's self-expression may be unduly and discriminatorily policed by surveillance technologies deployed in smart cities, particularly those integrating facial recognition and emotion recognition capabilities which run on biometric data, as a form of digital panopticon;[44] and whether an individual's access to diverse information may be affected by the digital divide and the dominance of centralized online platforms, potentially limiting one's capacity to imagine his or her identity[45] and make informed decisions.

Digital self-determination for children

Digital media and technology afford children the opportunity to engage in various activities that support their development, learning, and leisure.[46][47] Such activities include play, interaction with others, sharing and creating content, and experimenting with the varied forms of identity afforded by the mediums they engage with.[48] At the same time, children are minors, and their use of digital media can have unintended consequences for how they consume content, for their vulnerability, and for the ways their interactions with technology shape their emotional, behavioral, and cognitive development. Researchers in digital literacy and child-technology interaction therefore assert that the ethical design of technology is essential for creating equitable environments for children.[49] Work in digital media and learning acknowledges the affordances of technology for expanding children's learning and development, while also holding that children should learn critical digital literacies that enable them to communicate, evaluate, and construct knowledge within digital media.[50] Additionally, ethical considerations should be taken into account to support children's self-determination.[51] For instance, this body of work attends to involving children in the decision-making process of technology design as an ethical methodological approach. Involving children in the design process, and considering the ethical dimensions of their interactions with technology, shifts the notion of vulnerability toward supporting children to enact their self-determination and positions them as active creators of their own digital futures.

Beyond ethical considerations, children's involvement with digital technologies and digital market practices also bears importantly on their privacy and data protection rights. The use of predictive analytics and tracking software can shape children's digital and real-life choices through massive profiling practices. Indeed, owing to the ubiquitous use of these algorithmic systems by both states and the private sector, children's privacy can easily be violated and children can become personally identifiable in the digital sphere.[52]

Article 12 of the UNCRC places a responsibility on states to ensure that children have the right to form and express their own views "freely, without any pressure".[53] On a literal reading, pressure refers to any kind of manipulation, oppression, or exploitation. States parties should recognize that all children, regardless of age, are capable of forming and expressing their own autonomous opinions.[53] The Committee on the Rights of the Child has also stated that children should have the right to be heard even if they lack a comprehensive understanding of the subject matter affecting them.[53] Moreover, Article 3 of the UNCRC states that the best interests of the child shall be embedded in private and governmental decision-making processes and shall be a primary consideration in the services and procedures that involve children.[54] Anchoring these responsibilities to private and public digital practices, and as highlighted in General Comment No. 25 of the Committee on the Rights of the Child, children are at great risk in the digital domain with regard to their vulnerable and evolving identities.[55] With the proliferation of mass surveillance and predictive analytics, states face new disputes over how to protect children's innate rights. To this end, recent class actions and efforts to regulate tech firms are promising examples of pushing the private sector to adopt more privacy-preserving practices toward children, which could provide a strong shield for their autonomy.[56][57] In this loosely regulated atmosphere, it has become easier to profit from behavioral advertising directed at children.[58] Without appropriate informed-consent and parental-consent practices, it is easy to manipulate and exploit children's intrinsic vulnerabilities and nudge them toward specific products and applications.
In this regard, Article 8 of the GDPR provides a set of age limits on the processing of children's personal data in relation to information society services (ISS). Pursuant to Article 8, where a child is at least 16 years old, the child may consent to the lawful processing of personal data, restricted to the purpose of the processing (Art. 6(1)(a)). For children under 16, processing is lawful only if, and to the extent that, consent is given or authorised by the holder of parental responsibility over the child. Member States may lower this age limit of 16 to as low as 13. In addition, data controllers should take the necessary measures to protect children's data. Supporting this, Recital 38 states that children merit specific protection in the use, collection, and processing of their data, given that children are less aware of the impacts, consequences, and safeguards of the processing of their personal data. The GDPR also refers to children in Articles 40 and 57 and in Recitals 58 and 75.
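The Article 8 age-of-consent rule can be read as a simple decision procedure. The sketch below is purely illustrative (not legal advice, and the function name and structure are hypothetical); the threshold is a parameter because Member States may set it anywhere between 13 and 16:

```python
# Illustrative sketch of the GDPR Article 8 consent rule for information
# society services offered directly to a child. Not legal advice; names
# and structure are hypothetical, chosen for clarity.

def consent_is_valid(age, child_consents, parental_consent,
                     member_state_threshold=16):
    """Return True if consent-based processing is lawful under Art. 8.

    member_state_threshold defaults to 16 but may be set as low as 13
    by individual Member States.
    """
    if not 13 <= member_state_threshold <= 16:
        raise ValueError("threshold must lie between 13 and 16")
    if age >= member_state_threshold:
        # At or above the threshold, the child may consent on their own
        # behalf under Art. 6(1)(a).
        return child_consents
    # Below the threshold, consent must be given or authorised by the
    # holder of parental responsibility.
    return parental_consent
```

For example, a 15-year-old's own consent suffices in a Member State that lowered the threshold to 13, but not in one that kept the default of 16, where parental consent is required instead.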

Beyond the GDPR, one of the more structured regulations is the UK Information Commissioner's Office (ICO) Children's Code (formally the Age Appropriate Design Code), which came into force in September 2020.[59] The Children's Code sets the age limit at 18 with regard to the ability to give free consent, while placing responsibility on providers of online services such as apps, games, connected devices and toys, and news services. What distinguishes the Children's Code from the EU regulations is that it applies to all information society services likely to be accessed by children: even if a service is not aimed directly at children, the parties offering it must comply with the Code. The Children's Code is also infused with the notion of the best interests of the child laid out in the UNCRC.[60] Given this broad scope, the ICO lists a set of guiding points for organizations to support the best interests of the child, such as recognizing that children have evolving capacities to form their own views and giving due weight to those views, protecting their need to develop their own ideas and identity, and safeguarding their rights to assembly and play.[60] The Code also extends the protection of children's personal data with a set of key standards, including data minimisation, data protection impact assessments, age-appropriate design, privacy by default, and transparency.

Geopolitical and cultural power dynamics in the digital world

Digital Colonialism

The politics of empire permeate shared histories. Unequal social relations between colonizing and colonized peoples materialized through exploitation, segregation, epistemic violence, and so on. Throughout the world, these discourses of colonialism came to dominate peoples' perceptions and cultures. Post-colonial critics have examined how colonized peoples could attain cultural, economic, and social agency against the oppressive structures and representations imposed on their lives and societies.[61]

However, the prefix "post" implies that the historical period of colonization has ended and that colonized subjects are now free of its discourses. Scholars have instead focused on the continuity of colonialism even after its formal end: neo-colonial structures and discourses remain part of different "postcolonial" cultures.[62] The postcolonial era, in which colonized countries gained independence and autonomy, has been a means for their populations to regain self-determination and freedom. Yet neo-colonial structures are still rampant in postcolonial societies. Although the nation-state might advance the idea of autonomy and self-determination, new forms of colonialism are always emerging. This dialectic between colonialism and self-determination encompasses a range of fields, changing in form, focus, and scope over time. It is reflected in the complex political and policy relationships between "postcolonial" peoples and the state, especially since most states replicate the legal and political systems of their former colonizers.

History shows that state policy in fields as diverse as health, education, housing, public works, employment, and justice has had, and continues to have, negative effects on indigenous peoples after independence.[63] This negative effect is shared across formerly colonized peoples. Alongside these political tensions, economic interests have manipulated legal and governance frameworks to extract value and resources from formerly colonized territories, often without adequate compensation for, or consultation with, the affected individuals and communities.[64] Accordingly, digital colonialism has emerged as a dominant discourse in the digital sphere.

Digital colonialism is a structural form of domination exercised through centralized ownership and control of the three core pillars of the digital ecosystem: software, hardware, and network connectivity. Control over these three pillars gives giant corporations immense political, economic, and social power over not only individuals but even nation-states.[65] Assimilation into the tech products, models, and ideologies of foreign powers constitutes a colonization of the internet age.[66]

Today, a new form of corporate colonization is taking place. Instead of conquering land, Big Tech corporations are colonizing digital technology. The following functions are dominated by a handful of multinational companies: search engines (Google); web browsers (Google Chrome); smartphone and tablet operating systems (Google Android, Apple iOS); desktop and laptop operating systems (Microsoft Windows); office software (Microsoft Office, Google Docs); cloud infrastructure and services (Amazon, Microsoft, Google, IBM); social networking platforms (Facebook, Twitter); transportation (Uber, Lyft); business networking (Microsoft LinkedIn); streaming video (Google YouTube, Netflix, Hulu); and online advertising (Google, Facebook), among others. These include the five wealthiest corporations in the world, with a combined market cap exceeding $3 trillion.[67] If a nation-state integrates these Big Tech products into its society, these multinational corporations obtain enormous power over its economy and create technological dependencies that lead to perpetual resource extraction.[citation needed] This resembles the colonial period, in which colonies were made dependent on the colonizer's economy for further exploitation.

Under digital colonialism, digital infrastructure in the Global South is engineered for Big Tech companies' needs, enabling economic and cultural domination while imposing privatized forms of governance.[68] To accomplish this, major corporations design digital technology to ensure their own dominance over critical functions in the tech ecosystem. This allows them to accumulate profits from rent-derived revenues and to exercise control over the flow of information, social activities, and a plethora of other political, social, economic, and military functions that use their technologies.

Digital colonialism depends on code. In Code: And Other Laws of Cyberspace, Lawrence Lessig famously argued that computer code shapes the rules, norms, and behaviors of computer-mediated experiences. As a result, "code is law" in the sense that it has the power to usurp legal, institutional, and social norms impacting the political, economic, and cultural domains of society. This critical insight has been applied in fields like copyright, free speech regulation, Internet governance, blockchain, privacy, and even torts.[69] It parallels the role of architecture in physical space during colonialism, when buildings and infrastructure were constructed to reinforce the dominance and reach of colonial power.[70]

"Postcolonial" peoples thus face multiple digital limitations on their access to and use of networked digital infrastructures, which threaten to reflect and restructure existing relations of social inequality grounded in colonialism and continuing processes of neo-colonialism. Indigenous peoples are acutely aware of this potential and are working with various partners to decolonize the digital sphere, undertaking a variety of projects that represent their diverse and localized experiences alongside a common desire for self-determination.[citation needed] Rural and remote indigenous communities face persistent problems of access to the digital sphere associated with the historic and ongoing effects of colonialism; remote communities are rendered 'offline by design' because their efforts to go online have been obstructed.[71] Indigenous peoples are asserting their digital self-determination by using digital platforms to build online communities, express virtual identities, and represent their culture online, moving from static offline representation toward 'networked individualism'.[72] Their engagement with the digital sphere resists imposed representations of their identities and deterritorializes conceptions of virtual communities. Accordingly, formerly colonized peoples are continually engaged in decolonizing the neo-/colonial discourses that dominate the internet.

Digital apartheid

Digital apartheid has also been a key concept in debates around digital self-determination. For authors such as Christian Fuchs, digital apartheid means that "certain groups and regions of the world are systematically excluded from cyberspace and the benefits that it can create."[73]

Brown and Czerniewicz (2010), drawing on a research project interrogating South African higher education students' access to Information and Communications Technology (ICT), highlight that while digital divides were once characterized by age or generational differences, they are now rather a question of access and opportunity, claiming that in the present day "digital apartheid is alive and well."[74]

Borrowing from Graham (2011),[75] and extending the concept to the conditions surrounding higher education in post-apartheid South Africa, Ashton et al. (2018)[76] describe digital apartheid as a multidimensional process with three dimensions: a material dimension (including access to infrastructure, devices, cellular coverage, and electricity), a skills dimension (including the education legacy regarding computer training and social capital with regard to family or community computer skills), and a virtual dimension (including language, culture, and contextual relevance). The authors argue that "The virtual dimension emerges from the intentional act of 'digital redlining' which takes on a number of forms. It may be under the guise of protecting an organisation from spam and illicit, harmful cyber-attacks, but has the secondary outcome of blocking or filtering out communities who only have access through cheaper portals."[76] It also includes the influence of the Westernised, English-language internet, which further shapes content visibility. The skills dimension arises from the fact that ICT lessons were not part of the curriculum until recently, so skills development remained limited. The authors identify the material dimension as the most cited concern regarding introducing technology as part of the curriculum, arguing that "the lack of power infrastructure in lower socio-economic areas and exorbitant data costs, impact some students' ability to access their learning resources."[76]

Since 2019, this concept, signifying advantage for some and dispossession for others, has also been used to characterize internet shutdowns and communications blockades in Jammu and Kashmir. The region, contested and claimed in its entirety by both India and Pakistan and the site of an active armed conflict, witnessed the Indian state imposing a total communication blackout and internet shutdown on the intervening night of 4 and 5 August 2019, as part of its unilateral measures to remove the semi-autonomous status of the disputed territory.[77] Low-speed 2G internet was restored in January 2020,[78] while high-speed 4G internet was restored in February 2021.[79] A 2019 report notes that between 2012 and 2019 there were 180 internet shutdowns in the region.[80] India also topped the list of 29 countries that disrupted internet access for their people in 2020.[81] The report by Access Now highlighted: "India had instituted what had become a perpetual, punitive shutdown in Jammu and Kashmir beginning in August 2019. Residents in these states had previously experienced frequent periodic shutdowns, and in 2020 they were deprived of reliable, secure, open, and accessible internet on an ongoing basis."[81] Placing these frequent shutdowns in the context of the ongoing conflict in Kashmir, the report Kashmir's Internet Siege (2020) by the Jammu Kashmir Coalition of Civil Society argues that through them the Indian government has been enacting a "digital apartheid," "a form of systemic and pervasive discriminatory treatment and collective punishment."[82] According to the report, "frequent and prolonged internet shutdowns enact a profound digital apartheid by systematically and structurally depriving the people of Kashmir of the means to participate in a highly networked and digitised world."[82]

This systematic censorship and deprivation not only excluded the people, collectively, from participating in cyberspace but also crippled IT companies and startups in Kashmir. By the third month of the world's longest internet shutdown, which began on the intervening night of 4 and 5 August 2019 across Jammu and Kashmir,[82] it was noted to have affected at least a thousand employees in this sector.[83] In a statement, UN Special Rapporteurs referred to the communication blackout as collective punishment imposed without even the pretext of a precipitating offence: "The shutdown of the internet and telecommunication networks, without justification from the Government, are inconsistent with the fundamental norms of necessity and proportionality," the experts said.[84] A news report quoting an entrepreneur who had been running a successful startup noted that the "Internet is the oxygen for start-ups. The Centre pulled that plug on August 5. The virtual world was our space for growth. Now that's gone. All employees and producers have been rendered jobless [..] I have to work by hook or by crook to meet the damage inflicted by loss of customers, undelivered orders and accumulated goods after the non-availability of Internet."[85] In June 2020, it was reported for the first time that non-local companies had won a majority of online contracts for the mining of mineral blocks, as locals were left at a disadvantage by the ban on high-speed internet.[86]

The effect of this digital apartheid was also witnessed during the lockdown induced by the COVID-19 pandemic, which left healthcare infrastructure crippled as doctors complained of being unable to access information or attend trainings on the coronavirus owing to the restricted internet. The president of the Doctors Association noted that the awareness drives carried out elsewhere about the virus were impossible to run in Kashmir: "We want to educate people through videos, which is not possible at 2G speed. We are handicapped in the absence of high speed internet."[87] Health experts and locals warned that the internet blackout was hampering the fight against the coronavirus in the region.[88] The internet shutdown also affected education at all levels in the region. News reports noted how Kashmiri education was left behind even as life elsewhere moved online under the pandemic's stay-at-home guidelines.[89] A news report a year after the communication blackout and subsequent restriction of high-speed internet highlighted that it had "ravaged health, education, entrepreneurship" in the region.[90]

Regulating for digital self-determination

The legal landscape

Promoting concepts and rights closely related to digital self-determination is a common goal behind regulatory initiatives in various legal systems. Stemming from the conceptual framework of human rights and the well-established notion of informational self-determination, digital self-determination has come to play an increasingly important role as a concept encompassing values and virtues that remain highly relevant in the context of the global network society, such as autonomy, dignity, and freedom.

The importance of embedding fundamental values into the legislative frameworks regulating the digital sphere has been stressed numerous times by scholars,[91] public authorities, and representatives of various organizations.

The EU's legal policy, while not explicitly referencing a right to digital self-determination, pursues closely related objectives. One of the overarching premises of the European Digital Strategy is to encourage the development of trustworthy technology that "works for the people".[92] It aims at advancing, among other things, "human-centered digital public services and administration", as well as "ethical principles for human-centered algorithms".

The EU has outlined these policy goals in several regulatory agendas, including, among others, the EU Commission Digital Strategy, the European Data Strategy, and the EU's White Paper on Artificial Intelligence. Subsequently, the EU has pursued these objectives through the adoption or proposal of several legal instruments, including:

  • The General Data Protection Regulation, aimed at laying down "rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data", protecting "fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data" and guaranteeing "the free movement of personal data within the Union". The main provisions relating to the concept of digital self-determination include principles of data processing (e.g. fairness, transparency, and accountability), grounds for legitimate data processing (notably consent and legitimate interests), rights of data subjects (e.g. the right to be informed, right to be forgotten, right to object), right to data portability, obligations associated with privacy-by-design and privacy-by-default, rights and obligations concerning algorithmic data processing (notably profiling and automated decision-making), and obligations concerning data transfers outside the European Economic Area.
  • The ePrivacy Regulation, a legislative proposal aimed at regulating the issues concerning electronic communications within the EU, including confidentiality of communications, privacy controls through electronic consent and browsers, and cookies.
  • The Digital Services Act, a legislative proposal aimed at harmonizing rules regarding digital intermediary services, most notably illegal content, transparent advertising, disinformation on social media platforms, and content recommender systems, while preserving freedom of expression. The DSA is one of two proposals of the Digital Services Act package.
  • The Digital Markets Act, a legislative proposal aimed at regulating the performance of the large online platforms acting as "gatekeepers" in the European Single Market, thus guaranteeing fair competition and "leveling the playing field". The DMA is one of two proposals of the Digital Services Act package.
  • The Regulation on Artificial Intelligence, a legislative proposal aimed at providing developers, deployers, and users with clear requirements and obligations regarding specific uses of AI. The draft regulation introduces, among other things, a catalog of prohibited AI practices that distort a person's behavior in a manner that can lead to physical or psychological harm.
  • The Data Governance Act and the Open Data Directive, legislative proposals aimed at creating trustworthy data-sharing systems that will empower EU citizens to decide on sharing their data across sectors and Member States, while increasing the annual economic value of data sharing in the EU and creating social and economic benefits.[93]
  • The Copyright Directive,[94] aiming to protect intellectual property and, consequently, intellectual work. However, it strikes a difficult and controversial balance with another aspect of self-determination, freedom of speech (especially in Article 17).
  • The Audiovisual Media Services Directive,[95] regulating the freedom of information in the domain of Audiovisual Media Services as well as the liability of the platforms.

The U.S. has yet to introduce a comprehensive information privacy law; legislation pertaining to data and digital rights currently exists at both the state and federal levels and is often sector-specific. The Federal Trade Commission (FTC) is tasked with overseeing the protection of consumers' digital privacy and security, outlining fair information practice principles for the governance of online spaces.[96]

Federal legislation includes the Children's Online Privacy Protection Act (COPPA), which regulates the online collection of personally identifiable information from children under the age of thirteen. The Health Insurance Portability and Accountability Act (HIPAA) sets federal standards for protecting the privacy and security of personal health data stored electronically, and the Family Educational Rights and Privacy Act (FERPA) governs access to and disclosure of student educational records. While state legislation varies in the strength of its protections, the California Consumer Privacy Act (CCPA) of 2018 provides California consumers with the rights to access their data, to know and delete personal information collected by businesses, to opt out of the sale of this information, and to non-discrimination for exercising these rights.

Ethical and rights-based principles for AI

Artificial intelligence and digital self-determination

The proliferation of artificial intelligence (AI), not a single technology but rather a set of technologies,[97] is increasingly shaping the technologically mediated spaces in which individuals and communities conduct their lives. From algorithmic recommendation in e-commerce[98] and on social media platforms,[99] and smart surveillance in policing,[100] to automated resource allocation in public services,[101] the extent of possible AI applications that can influence an individual's autonomy remains continuously contested, given the widespread datafication of people's lives across the socio-economic and political spheres today.

For example, machine learning, a subfield of artificial intelligence, "allows us to extract information from data and discover new patterns, and is able to turn seemingly innocuous data into sensitive, personal data",[102] meaning an individual's privacy and anonymity may be vulnerable outside the original data domain, such as when social media data is harvested for computational propaganda in elections through micro-targeting.[103]

Another sphere in which AI systems can affect the exercise of self-determination is when the datasets on which algorithms are trained mirror existing structures of inequality, thereby reinforcing structural discrimination that limits certain groups' access to fair treatment and opportunities. In the United States, an AI recruiting tool used by Amazon was shown to discriminate against female job applicants,[104] while an AI-based modelling tool used by the Department of Human Services in Allegheny County, Pennsylvania, to flag potential child abuse was shown to disproportionately profile poor and racial-minority families, raising questions about how predictive variables in algorithms can often be "abstractions" that "reflect priorities and preoccupations".[105]

Current landscape of AI principles relevant to digital self-determination

How states attempt to govern the AI industry shapes how, and within what ethical frameworks, AI applications are developed, tested, and operated, thereby affecting the degree of digital self-determination that individuals and communities can exercise.

In recent years, there has been a proliferation of high-level principles and guidelines documents,[106] providing non-binding suggestions for public-sector policies and private-sector codes of conduct. Compared to binding laws enacted by states, the landscape of AI ethics principles paints a more diverse picture, with governmental and non-governmental organisations, including private companies, academic institutions, and civil society, actively developing the ecosystem. A 2020 report by the United Nations identified "over 160 organizational, national and international sets of AI ethics and governance principles worldwide, although there is no common platform to bring these separate initiatives together".[107]

Common themes of AI principles have emerged as research efforts have developed, many of them closely linked to the various conditions of digital self-determination, such as control over one's data, protection from biased treatment, and equal access to the benefits offered by AI. A 2020 publication by the Berkman Klein Center for Internet and Society at Harvard University studied thirty-six "especially visible or influential" AI principles documents authored by governmental and non-governmental actors from multiple geographical regions and identified eight key themes:

  • Privacy
  • Accountability
  • Safety and Security
  • Transparency and Explainability
  • Fairness and Non-discrimination
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values

However, the report also notes "a wide and thorny gap between the articulation of these high-level concepts and their actual achievement in the real world".[108]

Examples of intergovernmental and governmental AI principles

Currently, few AI governance principles are internationally recognised. The "OECD Principles on AI", adopted by OECD member states and nine other non-OECD countries in May 2019, integrate elements relevant to digital self-determination such as "inclusive growth", "well-being", and "human-centered values and fairness", while emphasizing an individual's ability to appeal and "challenge the outcome of AI system" and the adherence of AI development to "internationally recognized labour rights".[109]

At the national level, numerous state AI policies reference AI ethics principles, though in varying ways. Such references can take the form of standalone documents. For example, Japan's government established its "Social Principles of Human-Centric AI",[110] closely linked to its "AI Strategy 2019: AI for Everyone: People, Industries, Regions and Governments",[111] and a separate set of AI Utilization Guidelines that encourage voluntary adherence and emphasize that AI shall be used to "expand human abilities and creativity", shall not "infringe on a person's individual freedom, dignity or equality", and shall adhere to the "principle of human dignity and individual autonomy".[112]

AI principles can also be incorporated into a national AI strategy that primarily focuses on policy instruments advancing AI, such as investment in STEM education and public-private partnerships. For example, India's "National Strategy for Artificial Intelligence", published in June 2018, identifies key areas of high national priority for AI development (healthcare, agriculture, education, urban/smart-city infrastructure, and transportation and mobility), with ethical topics such as privacy and fairness integrated in a forward-looking section.[113]

Opportunities and challenges for AI principles to address self-determination

Non-binding AI principles suggested by actors inside or outside government are sometimes further concretized into specific policy or regulation. In 2020, the Centre for Data Ethics and Innovation, the United Kingdom government's advisory body on the responsible use of AI, proposed specific measures for government, regulators, and industry to tackle algorithmic bias in financial services, local government, policing, and recruitment,[114] each an area relevant to how individuals conduct their lives and access socio-economic opportunities without being subjected to unfair treatment.

Cultural and geographical representation has been highlighted as a challenge in ensuring the burgeoning AI norms sufficiently consider unique opportunities and risks faced by the global population, who exercise their autonomy and freedom in vastly different political regimes with varying degrees of rule of law. In 2020, a report published by the Council of Europe reviewed 116 AI principles documents and found that "these soft law documents are being primarily developed in Europe, North America and Asia", while "the global south is currently underrepresented in the landscape of organisations proposing AI ethics guidelines".[115]

References


  1. ^ a b The State of the Internet in France (PDF). ARCEP. 2021.
  2. ^ a b Belli, Luca (2018). "Network self-determination: When building the Internet becomes a right". IETF Journal.
  3. ^ Deci, E. L., & Ryan, R. M. (1980). Self-determination theory: When the mind mediates behavior. The Journal of Mind and Behavior, 33-43.
  4. ^ Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum.
  5. ^ Ryan, R. M.; Deci, E. L. (2000). "Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being". American Psychologist. 55 (1): 68–78. doi:10.1037/0003-066X.55.1.68. PMID 11392867.
  6. ^ Schwartz, B. (2000). Self-determination: The tyranny of freedom. American psychologist, 55(1), 79.
  7. ^ a b Remolina, Nydia and Findlay, Mark James, The Paths to Digital Self-Determination - A Foundational Theoretical Framework (April 22, 2021). SMU Centre for AI & Data Governance Research Paper No. 03/2021,
  8. ^ a b Center for Humane Technology, "Ledger of Harms", accessed May 22, 2021.
  9. ^ a b Berkman Klein Center for Internet and Society at Harvard University, Digital Self-Determination call for participants (2021), accessed May 5, 2021.
  10. ^ Cologne center for ethics, rights, economics, and social sciences of health, "Digital self-determination", accessed May 22, 2021.
  11. ^ Belli, Luca (2017). "Network Self-Determination and the Positive Externalities of Community Networks". Community Networks: The Internet by the People, for the People. FGV Direito Rio. p. 24.
  12. ^ Community Networks in Latin America: Challenges, Regulations and Solutions. 2018.
  13. ^ Internet Governance Forum, Promoting Digital Self-Determination (2020), accessed May 22, 2021.
  14. ^ Centre for AI and Data Governance, Singapore Management University, accessed May 22, 2021.
  15. ^ "Research sprint examines "digital self-determination" in increasingly interconnected world | Berkman Klein Center". 2021-03-20. Retrieved 25 May 2021.
  16. ^ Jeder Mensch, "The 6 European Fundamental Rights", accessed May 22, 2021.
  17. ^ "International Digital Self-Determination Network". International Digital Self-Determination Network.
  18. ^ "Announcing the International Digital Self-Determination Network". The GovLab. 21 October 2021. Retrieved 10 March 2022.
  19. ^ "Directorate of International Law, Swiss Federal Department of Foreign Affairs".
  20. ^ "Centre for Artificial Intelligence and Data Governance at Singapore Management University".
  21. ^ "Berkman Klein Center at Harvard University".
  22. ^ "Global Tech Policy Practice at TUM School of Social Sciences and Technology".
  23. ^ "The GovLab, New York University".
  24. ^ Digital Divide, World Wide Web Foundation. Retrieved 2021-05-28.
  25. ^ Closing the Digital Divide, Common Sense Media. Retrieved 2021-05-28.
  26. ^ Belli, Luca (2019). "La emergencia de las redes comunitarias y del principio de autodeterminación de red: un ejemplo de gobernanza de Internet". Revista de privacidad y derecho digital. 4 (13): 77–112. ISSN 2444-5762.
  27. ^ Belli, Luca (2018). "Community Networks: Bridging Digital Divides through the Enjoyment of Network Self-determination". The Community Network Manual: how to build the Internet yourself (PDF). FGV Direito Rio. pp. 23–41. ISBN 978-85-9597-029-8.
  28. ^ "What is digital literacy?", Common Sense Media, Archived 2022-02-01 at the Wayback Machine, accessed May 28, 2021
  29. ^ Brunton, F., & Nissenbaum, H. (2015). Obfuscation: A User's Guide for Privacy and Protest. MIT Press
  30. ^ Pariser, Eli (2011-05-12). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin. ISBN 9781101515129. Retrieved 2020-10-11.
  31. ^ Center for Humane Technology, “How Social Media Hacks Our Brains”, Retrieved 2021-05-22.
  32. ^ "What is a data trust?", Data Futures Lab, Mozilla Foundation.
  33. ^ "What is a data common?", Data Futures Lab, Mozilla Foundation.
  34. ^ "What is a data cooperative?", Data Futures Lab, Mozilla Foundation,, accessed May 28, 2020
  35. ^ "What is a data collaborative?", Data Futures Lab, Mozilla Foundation,, accessed May 28, 2020
  36. ^ "What is a data fiduciary?", Data Futures Lab, Mozilla Foundation,, accessed May 28, 2020
  37. ^ Solid Project, accessed May 28, 2020.
  38. ^ "What does it mean? Shifting Power Through Data Governance", Data Futures Lab, Mozilla Foundation,, accessed May 28, 2021
  39. ^ "Dagstuhl Manifesto". Informatik-Spektrum. 34 (4): 413–423. 2011-06-15. doi:10.1007/s00287-011-0552-9. ISSN 0170-6012. S2CID 30112209.
  40. ^ Suter, Viktor (2020), Algorithmic Panopticon: State Surveillance and Transparency in China's Social Credit System, Communications in Computer and Information Science, vol. 1349, Cham: Springer International Publishing, pp. 42–59, doi:10.1007/978-3-030-67238-6_4, ISBN 978-3-030-67237-9, S2CID 235011263, retrieved 2021-05-18
  41. ^ Aho, Brett; Duffield, Roberta (2020-04-02). "Beyond surveillance capitalism: Privacy, regulation and big data in Europe and China". Economy and Society. 49 (2): 187–212. doi:10.1080/03085147.2019.1690275. ISSN 0308-5147. S2CID 218914930.
  42. ^ Brkan, Maja (2019). "Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond". International Journal of Law and Information Technology. 27 (2): 91–121. doi:10.1093/ijlit/eay017. ISSN 0967-0769.
  43. ^ Graham, Mark; Hjorth, Isis; Lehdonvirta, Vili (May 2017). "Digital labour and development: impacts of global digital labour platforms and the gig economy on worker livelihoods". Transfer: European Review of Labour and Research. 23 (2): 135–162. doi:10.1177/1024258916687250. ISSN 1024-2589. PMC 5518998. PMID 28781494.
  44. ^ Galič, Maša; Timan, Tjerk; Koops, Bert-Jaap (2016-05-13). "Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation". Philosophy & Technology. 30 (1): 9–37. doi:10.1007/s13347-016-0219-1. ISSN 2210-5433.
  45. ^ Delacroix, Sylvie (2020). "Social Media Manipulation, Autonomy and Capabilities". SSRN Electronic Journal. doi:10.2139/ssrn.3710786. ISSN 1556-5068. S2CID 234659690.
  46. ^ Ito, M.; Martin, C. (2013). "Connected learning and the future of libraries". Young Adult Library Services. 12 (1): 29–32.
  47. ^ Ito, M.; Martin, C.; Pfister, R.C.; Rafalow, M.H.; Salen, K.; Wortman, A. (2019). Affinity Online: How Connection and Shared Interest Fuel Learning (Volume 2). NYU Press. ISBN 978-1-4798-8890-0.
  48. ^ Gee, J.P. (2017). Teaching, learning, literacy in our high-risk high-tech world: A framework for becoming human. Teachers College Press. ISBN 978-0807758601.
  49. ^ "Designing for Children's Rights Guide". The Designing for Children Rights (D4CR) Association. Retrieved 21 May 2021.
  50. ^ Ávila, J.; Pandya, J.Z. (2013). "Critical Digital Literacies as Social Praxis: Intersections and Challenges". New Literacies and Digital Epistemologies. 54. New York: Peter Lang.
  51. ^ Steinberg, Stacey (28 April 2021). "Ethical AI? Children's Rights and Autonomy in Digital Spaces". The London School of Economics and Political Science. Retrieved 21 May 2021.
  52. ^ Palfrey, John; Gasser, Urs (2008). Born Digital. Basic Books. ISBN 978-0-465-00515-4.
  53. ^ a b c "General comment No. 12 (2009): The right of the child to be heard". CRC/C/GC/12. UN Committee on the Rights of the Child (CRC). 20 July 2009. Retrieved 21 May 2021.
  54. ^ "General comment No. 14 (2013) on the right of the child to have his or her best interests taken as a primary consideration (art. 3, para. 1)" (PDF). CRC /C/GC/14. UN Committee on the Rights of the Child (CRC). 29 May 2013. Retrieved 21 May 2021.
  55. ^ "General comment No. 25 (2021) on children's rights in relation to the digital environment". CRC/C/GC/25. The UN Committee on the Rights of the Child. 2 March 2021. Retrieved 21 May 2021.
  56. ^ "Children's Privacy Plaintiffs Settle With Disney, Seek Trial With App Developer". Mealey's Data Privacy News Reports/Lexis Legal News. 6 April 2021. Retrieved 21 May 2021.
  57. ^ Stacey, Kiran; Murphy, Hannah (13 April 2021). "Momentum builds US laws to protect children from Big Tech". Financial Times. Retrieved 21 May 2021.
  58. ^ Lapierre, Matthew A.; Fleming-Milici, Frances; Rozendaal, Esther; McAlister, Anne R.; Castonguay, Jessica (November 2017). "The Effect of Advertising on Children and Adolescents". Pediatrics. 140 (Supplement 2): S152–S156. doi:10.1542/peds.2016-1758V. PMID 29093052.
  59. ^ "Explanatory momerandum to the Age Appropriate Design Code". The U.K. Department for Digital, Culture, Media & Sport. 11 June 2020. Retrieved 21 May 2021.
  60. ^ a b "Best interest of the child, Age appropriate design: a code of practice for online services". The U.K. ICO’s Guide to data protection for Organizations. The UK Information Commissioner's Office. 2 September 2020. Retrieved 21 May 2021.
  61. ^ Spivak, G. 1999. Can the subaltern speak? In Social Theory: The Multicultural and Classic Readings edited by C. Lemert. Boulder, CO: Westview Press. pp. 610-614; Said, E. 1979. Orientalism. New York: Vintage Books; Fanon, F. 1963. The Wretched of the Earth. New York: Grove Press.
  62. ^ Appiah, K. A. 1991. "Is the Post- in Postmodernism the Post- in Postcolonial?" Critical Inquiry, vol 17. pp. 336-357; Shohat, E. 1992. "Notes on the 'Post-Colonial'." Social Text, vol 31. pp. 99-113; Benharrousse, R. 2020 "The Dilapidated Prefix: Beyond Postcolonialism's 'Post' and Towards the Process." In Post-colonial Praxis: Ramifications and Intricacies. Mumbai: Notion Press. pp. 36-57.
  63. ^ Valaskakis, G.G. 2005. Indian Country: Essays on Contemporary Native Culture. Waterloo: Wilfrid Laurier University Press.
  64. ^ Alfred, T. 2009. Wasáse: Indigenous pathways of action and freedom. Toronto: University of Toronto Press.
  65. ^ Kwet, M. 2019 “Digital Colonialism: US Empire and the New Imperialism in the Global South,” Race & Class Volume 60, No. 4 (April). pp. 3-26. doi:10.1177/0306396818823172
  66. ^ Kwet, M. 2018 "Break the hold of digital colonialism," Mail & Guardian, 29 June.
  67. ^ The Daily Records. 2018. "Top 10 Largest Companies in the World by Market Cap," 26 March.
  68. ^ Bakan, J. 2005. The Corporation: The Pathological Pursuit of Profit and Power. New York, NY: Free Press.
  69. ^ Lessig, L. 1999. Code and Other Laws of Cyberspace. New York: Basic Books.
  70. ^ Reidenberg, J. 1998. “Lex Informatica: The Formation of Information Policy Rules Through Technology,” Texas Law Review 76, no. 3.
  71. ^ Sandvig, C. 2006. The Structural Problems of the Internet for Cultural Policy. In Critical Cyberculture Studies edited by D. Silver and A. Massanari. New York: NYU Press. pp.107-118.
  72. ^ Rainie, L. & Wellman, B. 2012. Networked: The New Social Operating System. Cambridge and London: The MIT Press.
  73. ^ Finnemann, Niels Ole (2010). "Christian Fuchs: Internet and society – Social theory in the information age. London: Routledge. 2008". MedieKultur: Journal of Media and Communication Research. 26 (48): 4. doi:10.7146/mediekultur.v26i48.2316. ISSN 1901-9726.
  74. ^ Brown, C.; Czerniewicz, L. (2010). "Debunking the 'digital native': beyond digital apartheid, towards digital democracy". Journal of Computer Assisted Learning. 26 (5): 357–369. doi:10.1111/j.1365-2729.2010.00369.x. hdl:11427/3334. ISSN 0266-4909.
  75. ^ Graham, Mark (2011). "Time machines and virtual portals". Progress in Development Studies. 11 (3): 211–227. doi:10.1177/146499341001100303. ISSN 1464-9934. S2CID 17281619.
  76. ^ a b c Barnard-Ashton, Paula; Adams, Fasloen; Rothberg, Alan; McInerney, Patricia (2018). "Digital apartheid and the effect of mobile technology during rural fieldwork". South African Journal of Occupational Therapy. 48 (2): 20–25. doi:10.17159/23103833/2018/vol48n2a4. ISSN 2310-3833.
  77. ^ "Article 370: What happened with Kashmir and why it matters". BBC News. 2019-08-05. Retrieved 2021-05-14.
  78. ^ "2G mobile Internet services restored in J&K". The Hindu. 2020-01-25. ISSN 0971-751X. Retrieved 2021-05-14.
  79. ^ Jain, Bharti; P, M. Saleem (February 6, 2021). "After 18 months, 4G internet services restored in J&K". The Times of India. Retrieved 2021-05-14.
  80. ^ Maqbool, Majid (2019-10-22). "Internet shut down 180 times in J&K over past 8 years". National Herald. Retrieved 2021-05-14.
  81. ^ a b Chakravarti, Ankita (March 4, 2021). "India saw highest number of internet shutdowns in the world in 2020". India Today. Retrieved 2021-05-14.
  82. ^ a b c "Kashmir's Internet Siege - an ongoing assault on digital rights". 2020. Retrieved 2021-05-14.
  83. ^ Maqbool, Majid (2019-10-14). "Kashmir communication shutdown cripples IT companies and start-ups in Valley". National Herald. Retrieved 2021-05-14.
  84. ^ "Kashmir communications shutdown a 'collective punishment' that must be reversed, say UN experts". UN News. 2019-08-22. Retrieved 2021-05-14.
  85. ^ Ashiq, Peerzada (2019-12-07). "In a land without Internet: How the communication blackout is forcing young entrepreneurs out of Kashmir Valley". The Hindu. ISSN 0971-751X. Retrieved 2021-05-14.
  86. ^ "Kashmir: Online Bidding for Mineral Blocks Leaves Locals at a Disadvantage". The Wire. Retrieved 2021-05-14.
  87. ^ "Lack of internet in Kashmir prevents doctors from fighting coronavirus: Report". HT Tech. 2020-03-31. Retrieved 2021-05-14.
  88. ^ Parvaiz, Athar (2020-05-20). "Kashmir internet blackouts hinder health services, contact tracing". Reuters. Retrieved 2021-05-14.
  89. ^ "As Life Moves Online Amid the Pandemic, Kashmiri Education Is Being Left Behind". Retrieved 2021-05-14.
  90. ^ Zargar, Safwat. "A year without high-speed internet ravaged health, education, entrepreneurship in Kashmir". Retrieved 2021-05-14.
  91. ^ Rouvroy, Antoinette; Poullet, Yves. The right to informational self-determination and the value of self-development: reassessing the importance of privacy for democracy (PDF). Springer. pp. 45–76.
  92. ^ "The European Strategy for Data".
  93. ^ "Proposal for a Regulation on European data governance (Data Governance Act)".
  94. ^ "Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (Text with EEA relevance.)".
  95. ^ "Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in the Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities".
  96. ^ "Privacy Online: A Report to Congress" (PDF). Federal Trade Commission. Retrieved 19 May 2021.
  97. ^ Gasser, Urs; Almeida, Virgilio A.F. (November 2017). "A Layered Model for AI Governance". IEEE Internet Computing. 21 (6): 58–62. doi:10.1109/MIC.2017.4180835. ISSN 1089-7801. S2CID 13089526.
  98. ^ "The history of Amazon's recommendation algorithm". Amazon Science. 2019-11-22. Retrieved 2021-05-18.
  99. ^ Dodds, Laurence (2019-04-01). "Facebook to finally explain the decisions of its news feed algorithm". The Telegraph. ISSN 0307-1235. Retrieved 2021-05-18.
  100. ^ "Met police to begin using live facial recognition cameras in London". The Guardian. 2020-01-24. Retrieved 2021-05-18.
  101. ^ "Nearly half of councils in Great Britain use algorithms to help make claims decisions". The Guardian. 2020-10-28. Retrieved 2021-05-18.
  102. ^ "The ethics of artificial intelligence: Issues and initiatives" (PDF). European Parliament. March 2020.
  103. ^ Bradshaw, Samantha; Howard, Philip N. "The Global Disinformation Disorder: 2019 Global Inventory of Organised Social Media Manipulation. Working Paper 2019" (PDF). Oxford, UK: Project on Computational Propaganda.
  104. ^ Dastin, Jeffrey (2018-10-10). "Amazon scraps secret AI recruiting tool that showed bias against women". Reuters. Retrieved 2021-05-18.
  105. ^ Eubanks, Virginia (Jan 2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press. p. 143.
  106. ^ Jobin, Anna; Ienca, Marcello; Vayena, Effy (September 2019). "The global landscape of AI ethics guidelines". Nature Machine Intelligence. 1 (9): 389–399. arXiv:1906.11668. doi:10.1038/s42256-019-0088-2. ISSN 2522-5839. S2CID 201827642.
  107. ^ "Roadmap for Digital Cooperation" (PDF). United Nations. June 2020.
  108. ^ Fjeld, Jessica; Achten, Nele; Hilligoss, Hannah; Nagy, Adam; Srikumar, Madhulika (2020). "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI". SSRN Electronic Journal. doi:10.2139/ssrn.3518482. ISSN 1556-5068. S2CID 214464355.
  109. ^ "The Recommendation on Artificial Intelligence (AI)". OECD. 2019.
  110. ^ "Social Principles of Human-centric AI" (PDF). Government of Japan. 2019.
  111. ^ "AI Strategy 2019: AI for Everyone: People, Industries, Regions and Governments" (PDF). Integrated Innovation Strategy Promotion Council, Government of Japan. June 11, 2019.
  112. ^ "AI Utilization Guidelines" (PDF). Government of Japan. August 9, 2019.
  113. ^ "National Strategy for Artificial Intelligence: #AI for All (Discussion Paper)'" (PDF). Niti Aayog. 2018.
  114. ^ "CDEI proposes a roadmap to tackle algorithmic bias". Centre for Data Ethics and Innovation, 27 November 2020.
  115. ^ "Towards Regulation of AI Systems". CAHAI Secretariat, Council of Europe. December 2020.