Social translucence

From Wikipedia, the free encyclopedia

Social translucence (also referred to as social awareness) is a term proposed by Thomas Erickson and Wendy Kellogg to describe an approach to "design digital systems that support coherent behavior by making participants and their activities visible to one another".

Social translucence serves as a tool for transparency in socio-technical systems: its function is to make participants and their activities visible to one another.

Social translucence is, in particular, a core element of online social networking services such as Facebook and LinkedIn, where it shapes both how people expose their online identities and how they become aware of other people's activities, for instance through the activity feeds these systems provide.

Social translucence mechanisms have been made available in many web 2.0 systems.

Background

People's participation in online communities generally differs from their participatory behavior in real-world collective contexts. In daily life, humans are used to relying on "social cues" to guide their decisions and actions: for example, a group of people looking for a good place to have lunch will very likely choose a restaurant that has some customers inside over one that is empty (the more crowded restaurant may reflect its popularity and, in consequence, its quality of service). In online social environments, however, it is not straightforward to access these sources of information, which are normally logged by the systems but not disclosed to the users.

Some theories explain how this kind of social awareness affects the behavior of people in real-life scenarios. The American philosopher George Herbert Mead states that humans are social creatures, in the sense that a person's actions cannot be isolated from the behavior of the collective they are part of, because every individual's acts are influenced by larger social practices that serve as a general framework for behavior.[2] In his performance framework, the Canadian sociologist Erving Goffman postulates that in everyday social interactions individuals perform their actions by first collecting information from others, in order to know in advance what they may expect from them and thereby plan how to behave more effectively.[3]

Principles

According to Erickson et al., socially translucent systems should respect the principles of visibility (making significant social information available to users), awareness (bringing social rules to bear on our actions based on external social cues) and accountability (being able to identify who did what, and when) in order to effectively support users' communication and collaboration in virtual environments.[4] Zolyomi et al. proposed identity as a fourth dimension of social translucence, arguing that the design of socio-technical systems should include a rich description of who is visible, in order to give people control over disclosure and mechanisms to advocate for their needs.[5] McDonald et al. proposed a system architecture for structuring the development of socially translucent systems, which comprises two dimensions: one describing the types of user actions in the system, and a second describing the processing and interpretation performed by the system. This framework can guide designers in determining which activities are important to social translucence and need to be reflected, and how interpretive levels of those actions might provide contextual salience to users.[1]
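The three original principles can be illustrated with a minimal sketch (all names are hypothetical illustrations, not taken from the cited papers): an activity log whose entries are visible to every participant, attributable to a user, and timestamped.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class Action:
    """One attributable user action: who did what, and when (accountability)."""
    user: str
    verb: str
    timestamp: datetime


@dataclass
class TranslucentLog:
    """Hypothetical activity log that exposes actions as shared social cues."""
    actions: List[Action] = field(default_factory=list)

    def record(self, user: str, verb: str) -> None:
        self.actions.append(Action(user, verb, datetime.now(timezone.utc)))

    def visible_feed(self) -> List[str]:
        # Visibility: every participant sees the same rendered activity trace,
        # which supports awareness of what others are doing.
        return [f"{a.user} {a.verb}" for a in self.actions]

    def who_did(self, verb: str) -> List[str]:
        # Accountability: identify who performed a given kind of action.
        return [a.user for a in self.actions if a.verb == verb]


log = TranslucentLog()
log.record("alice", "edited the document")
log.record("bob", "edited the document")
log.record("alice", "left a comment")
print(log.visible_feed())
print(log.who_did("edited the document"))  # → ['alice', 'bob']
```

In this sketch the same stored trace serves all three principles; a real system would additionally decide, per McDonald et al.'s second dimension, how much interpretation to layer on top of the raw actions before showing them to users.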

Effects

Benefits

As in the real world, providing social cues in virtual communities can help people better understand the situations they face in these environments, ease their decision-making by giving them access to more informed choices, persuade them to participate in the activities that take place there, and structure their own schedule of individual and group activities more efficiently.[6]

In this frame of reference, an approach called "social context displays" has been proposed for showing social information (whether from real or virtual environments) in digital scenarios. It is based on graphical representations that visualize the presence and activity traces of a group of people, providing users with a third-party view of what is happening within the community, i.e. who is actively participating, who is not contributing to the group's efforts, and so on. This social-context-revealing approach has been studied in different scenarios (e.g. IBM video-conferencing software, and a large display of a community's social activity traces in a shared space, called NOMATIC*VIZ), and it has been demonstrated that its application can provide users with several benefits, such as more information for making better decisions and motivation to take an active attitude towards managing their self- and group representations within the display through their actions in real life.[6]

The feeling of personal accountability in front of others that social translucence can create in users can be exploited in the design of systems for supporting behavior change (e.g. weight loss, smoking cessation), if combined with the appropriate type of feedback.[7]

Concerns

Making the traces of users' activity publicly available for others to access naturally raises concerns about what rights users have over the data they generate, who ultimately has access to their information, and how they can know and control the applicable privacy policies.[6] Several perspectives attempt to contextualize this privacy issue. One views privacy as a tradeoff between the degree of invasion of the personal space and the benefits the user could perceive from the social system by disclosing their online activity traces.[8] Another examines the trade-off between people's visibility within the social system and their level of privacy, which can be managed at an individual or at a group level by establishing specific permissions that allow others to access their information. Other authors argue that instead of requiring users to set and control privacy settings, social systems might focus on raising users' awareness of who their audiences are, so that they can manage their online behavior according to the reactions they expect from those different user groups.[6]
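The individual- or group-level permission scheme mentioned above can be sketched as follows (a hypothetical illustration, not an implementation from the cited works): each user maps audiences to the types of activity traces that audience may see.

```python
# Hypothetical sketch of audience-based visibility permissions: a user
# discloses a given kind of activity trace only to permitted audiences.

class PrivacyPolicy:
    def __init__(self):
        # audience name -> set of trace types that audience may see
        self.rules = {}

    def allow(self, audience: str, trace_type: str) -> None:
        """Grant an audience (individual or group) access to one trace type."""
        self.rules.setdefault(audience, set()).add(trace_type)

    def can_see(self, audience: str, trace_type: str) -> bool:
        """Default-deny check: undisclosed traces stay private."""
        return trace_type in self.rules.get(audience, set())


policy = PrivacyPolicy()
policy.allow("coworkers", "presence")       # coworkers may see online presence
policy.allow("close_friends", "presence")
policy.allow("close_friends", "location")   # only close friends see location

print(policy.can_see("coworkers", "location"))      # → False
print(policy.can_see("close_friends", "location"))  # → True
```

The default-deny rule reflects the visibility/privacy trade-off described above: anything not explicitly disclosed to an audience remains invisible to it.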


References

  1. ^ a b McDonald, David W.; Gokhman, Stephanie; Zachry, Mark (2012). Building for social translucence. New York, New York, USA: ACM Press. doi:10.1145/2145204.2145301. ISBN 978-1-4503-1086-4. 
  2. ^ Mead, George. H. (1934). Mind, Self, and Society: From the Standpoint of a Social Behaviorist. Chicago: University of Chicago Press. 
  3. ^ Goffman, Erving (1990). The presentation of self in everyday life. London: Penguin. ISBN 978-0-14-013571-8. 
  4. ^ Erickson, Thomas; Kellogg, Wendy A. (2000-03-01). "Social translucence: an approach to designing systems that support social processes". ACM Transactions on Computer-Human Interaction. Association for Computing Machinery (ACM). 7 (1): 59–83. doi:10.1145/344949.345004. ISSN 1073-0516. 
  5. ^ Zolyomi, Annuska; Ross, Anne Spencer; Bhattacharya, Arpita; Milne, Lauren; Munson, Sean A. (2018). Values, Identity, and Social Translucence. New York, New York, USA: ACM Press. doi:10.1145/3173574.3174073. ISBN 978-1-4503-5620-6. 
  6. ^ a b c d Ding, Xianghua; Erickson, Thomas; Kellogg, Wendy A.; Patterson, Donald J. (2011). "Informing and performing: investigating how mediated sociality becomes visible". Personal and Ubiquitous Computing. 16 (8): 1095–1117. doi:10.1007/s00779-011-0443-8. ISSN 1617-4909. 
  7. ^ Barreto, Mary; Szóstek, Agnieszka; Karapanos, Evangelos (2013). An initial model for designing socially translucent systems for behavior change. New York, New York, USA: ACM Press. doi:10.1145/2499149.2499162. ISBN 978-1-4503-2061-0. 
  8. ^ Patil, Sameer; Lai, Jennifer (2005). "Who gets to know what when: configuring privacy permissions in an awareness application". p. 101. doi:10.1145/1054972.1054987. 