Talk:Multimodality

Wiki Education Foundation-supported course assignment

This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Wenqicui, Awouignandji, Tyejonesjr.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 01:16, 18 January 2022 (UTC)[reply]

Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education Foundation-supported course assignment, between 23 January 2019 and 8 May 2019. Further details are available on the course page. Student editor(s): Fuzzypenguin1998, Oameadows, Hledbetter, Luluanonymous.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 01:16, 18 January 2022 (UTC)[reply]

links to other Wikipedia articles

Have been interested in this topic and am glad to see the start of an article. However, has the article been inappropriately flagged for not including links to other Wikipedia articles? I see multiple embedded links to other Wikipedia articles. I will share the link with other editors I know who work more directly in writing studies and could probably contribute. — Preceding unsigned comment added by 128.186.130.128 (talk) 17:39, 13 May 2013 (UTC)[reply]

Inadequate title

The current title "Multimodality" is insufficiently descriptive and also not specific enough. The title should be something like "Multimodality in the media" or similar. Roger (Dodger67) (talk) 11:14, 14 May 2013 (UTC)[reply]

Outdated content

With it being 2018, some of this information is not as up-to-date and current as it should be, especially when concerning a topic highly involved with media and technology that is ever-changing. I encourage anyone with more current info to update and share. Nhuffman (talk) 21:14, 7 February 2018 (UTC)[reply]

External links modified (February 2018)

Hello fellow Wikipedians,

I have just modified 2 external links on Multimodality. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 03:50, 8 February 2018 (UTC)[reply]

new sources

Hi fellow wikipedians, ... Ryan Jiawei Xing (talk) 17:12, 3 August 2018 (UTC)[reply]

New section

Hello, we think it would be beneficial to add a new subhead under Education-Classroom Literacy called "Higher Education". Oameadows (talk) 18:34, 8 April 2019 (UTC)oameadows[reply]

Changes to education section

Hi wikipedians, we would like to add sources to this section, as well as delete and move some portions to help the section flow better. Fuzzypenguin1998 (talk) 22:35, 9 April 2019 (UTC)fuzzypenguin1998[reply]

Changes to the multiliteracy section

Hello Wikipedians, the structure of the multiliteracy section seems a bit confusing to me. I am thinking of swapping the first and second paragraphs and rewording the first paragraph. My suggested edit for the intro paragraph is as follows:

"Multilteracy is the concept of understanding information through various methods of communication and being proficient in those methods. With the growth of technology, there are more ways to communicate than ever before, making it necessary for our definition of literacy to change in order to better accommodate these new technologies. These new technologies consist of tools such as text messaging, social media, and blogs.[32] However, these modes of communication often employ multiple mediums at once such as audio, video, pictures, and animation. Thus, making content multimodal."Hledbetter (talk) 17:50, 10 April 2019 (UTC)hledbetter[reply]


Hello Wikipedians, the following are our final suggested edits to the education section of this page. Unless anyone has any suggestions or objections, we will be posting on Monday.

Education

Multimodality in the 21st century has caused educational institutions to consider changing the traditional forms of classroom education. With a rise in digital and Internet literacy, new modes of communication are needed in the classroom in addition to print, from visual texts to digital e-books. Rather than replacing traditional literacy values, multimodality augments and increases literacy for educational communities by introducing new forms. According to Miller and McVee, authors of Multimodal Composing in Classrooms, "These new literacies do not set aside traditional literacies. Students still need to know how to read and write, but new literacies are integrated."[1] The learning outcomes of the classroom stay the same, including – but not limited to – reading, writing, and language skills. However, these learning outcomes are now being presented in new forms, as multimodality in the classroom suggests a shift from traditional media such as paper-based texts to more modern media such as screen-based texts. The choice to integrate multimodal forms in the classroom is still controversial within educational communities. The idea of learning has changed over the years and now, some argue, must adapt to the personal and affective needs of new students. In order for classroom communities to be legitimately multimodal, all members of the community must share expectations about what can be done through integration, requiring a "shift in many educators’ thinking about what constitutes literacy teaching and learning in a world no longer bound by print text."[2]

Great work here! I made a few small grammatical corrections.Cathygaborusf (talk) 21:29, 16 April 2019 (UTC)cathygaborusf[reply]

Multiliteracy

Multiliteracy is the concept of understanding information through various methods of communication and being proficient in those methods. With the growth of technology, there are more ways to communicate than ever before, making it necessary for the definition of literacy to change in order to better accommodate these new technologies (add benefits). These new technologies consist of tools such as text messaging, social media, and blogs.[3] However, these modes of communication often employ multiple media simultaneously, such as audio, video, pictures, and animation, thus making content multimodal. The culmination of these different media is referred to as content convergence, which has become a cornerstone of multimodal theory. Within modern digital discourse, content has become accessible to many, remixable, and easily spreadable, allowing ideas and information to be consumed, edited, and improved by the general public. An example is Wikipedia: the platform allows free consumption and authorship of its work, which in turn facilitates the spread of knowledge through the efforts of a large community. It creates a space in which authorship has become collaborative and the product of that authorship is improved by the collaboration. As the distribution of information has grown through this process of content convergence, it has become necessary for our understanding of literacy to evolve with it.[4]

Double check Wikipedia's policies, you might need to cite most of these sentences, not just the last one.Cathygaborusf (talk) 21:34, 16 April 2019 (UTC)cathygaborusf[reply]

The shift away from written text as the sole mode of nonverbal communication has caused the traditional definition of literacy to evolve.[5] While text and image may exist separately, digitally, or in print, their combination gives birth to new forms of literacy and thus a new idea of what it means to be literate. Text, whether it is academic, social, or for entertainment purposes, can now be accessed in a variety of different ways and edited by several individuals on the Internet,


Make this the start of a new sentence.Cathygaborusf (talk) 21:34, 16 April 2019 (UTC)cathygaborusf[reply]

in this way, texts that would typically be concrete become amorphous through the process of collaboration. The spoken and written word are not obsolete, but they are no longer the only ways to communicate and interpret messages.[5] Many media can be used separately and individually.


Combining and repurposing one for another has contributed to the evolution of different literacies. I don't understand this sentence. Cathygaborusf (talk) 21:34, 16 April 2019 (UTC)cathygborusf[reply]


Communication is spread across a medium through content convergence, such as a blog post accompanied by images and an embedded video. This idea of combining media gives new meaning to the concept of translating a message. The culmination of varying forms of media allows content to be either reiterated or supplemented by its parts. This reshaping of information from one mode to another is known as transduction.[5] As information changes from one mode to the next, comprehension of its message is attributed to multiliteracy. Xiaolo Bao defines three successive learning stages that make up multiliteracy: the Grammar-Translation Method, the Communicative Method, and the Task-Based Method. Simply put, these can be described as the fundamental understanding of syntax and its function, the practice of applying that understanding to verbal communication, and lastly, the application of those textual and verbal understandings to hands-on activities. In an experiment conducted by the Canadian Center of Science and Education, students were placed either in a classroom with a multimodal course structure or in a classroom with a standard course structure as a control group. Tests were administered throughout the two courses, with the multimodal course concluding with a higher learning success rate and a reportedly higher rate of satisfaction among students. This indicates that applying multimodality to instruction yields better results in developing multiliteracy than conventional forms of learning when tested in real-life scenarios.[6]

Classroom literacy

Multimodality in classrooms has brought about the need for an evolving definition of literacy. According to Gunther Kress, a theorist of multimodality, literacy usually refers to the combination of letters and words to make messages and meaning, and the term can often be attached to other words to express knowledge of separate fields, such as visual or computer literacy. However, as multimodality becomes more common, not only in classrooms but also in work and social environments, the definition of literacy extends beyond the classroom and beyond traditional texts. Instead of referring only to reading and alphabetic writing, or being extended to other fields, literacy and its definition now encompass multiple modes. It has become more than just reading and writing, and now includes visual, technological, and social uses, among others.[7]


It is appropriate here to name the university (and to provide a link to its Wikipedia page). And, to be most formal, you should use "based on," not "based off."Cathygaborusf (talk) 21:39, 16 April 2019 (UTC)cathygaborusf[reply]

A university writing and communication program created a definition of multimodality based off the acronym WOVEN. The acronym explains how communication can be written, oral, visual, electronic, and nonverbal. Communication has multiple modes that can work together to create meaning and understanding. The goal of the program is to ensure students are able to communicate effectively in their everyday lives using various modes and media.[8] As classroom technologies become more prolific, so do multimodal assignments. Students in the 21st century have more options for communicating digitally, be it texting, blogging, or through social media.[9] This rise in computer-controlled communication has required classes to become multimodal in order to teach students the skills required in the 21st-century work environment.[9] However, in the classroom setting, multimodality is more than just combining multiple technologies; rather, it is creating meaning through the integration of multiple modes. Students are learning through a combination of these modes, including sound, gestures, speech, images, and text. For example, in digital components of lessons, there are often pictures, videos, and sound bites as well as text to help students gain a better understanding of the subject. Multimodality also requires that teachers move beyond teaching with just text, as the printed word is only one of many modes students must learn and use.[7][9][10] The application of visual literacy in the English classroom can be traced back to 1946, when the instructor's edition of the popular Dick and Jane elementary reader series suggested teaching students to "read pictures as well as words" (p. 15).[11] During the 1960s, a couple of reports issued by NCTE Spell out NCTE Cathygaborusf (talk) 21:39, 16 April 2019 (UTC)cathygaborusf[reply]

suggested using television and other mass media such as newspapers, magazines, radio, motion pictures, and comic books in the English classroom. The situation is similar in postsecondary writing instruction. Since 1972, visual elements have been incorporated into some popular twentieth-century college writing textbooks such as James McCrimmon's Writing with a Purpose.[11]

Higher education

Colleges and universities around the world are beginning to use multimodal assignments to adapt to the technology currently available. Assigning multimodal work also requires professors to learn how to teach multimodal literacy. Implementing multimodality in higher education is being researched to find out the best way to teach and assign multimodal tasks.[10] Fuzzypenguin1998 (talk) 00:52, 17 April 2019 (UTC)fuzzypenguin1998[reply]

This whole section is super specific. Can you give it more context by starting with some general information about multimodal literacy and higher education before jumping into one specific study?Cathygaborusf (talk) 21:40, 16 April 2019 (UTC)cathygaborusf[reply]


Multimodality in the college setting can be seen in an article by Teresa Morell, where she discusses how teaching and learning elicit meaning through modes such as language, speaking, writing, gesturing, and space. The study observes an instructor who conducts a multimodal group activity with students. Previous studies observed different classes using modes such as gestures, classroom space, and PowerPoint presentations. The current study observes an instructor's combined use of multiple modes in teaching to see its effect on student participation and conceptual understanding. She explains the different spaces of the classroom, including the authoritative space, the interactional space, and the personal space. The analysis shows how an instructor's multimodal choices shape student participation and understanding. On average, the instructor used three to four modes, most often some combination of gaze, gesture, and speech. He got students to participate by having them formulate a group definition of cultural stereotypes. It was found that those who are learning a second language depend on more than just the spoken and written word for conceptual learning, meaning that multimodal education has benefits.[12][10] Multimodal assignments involve many aspects other than written words, which may be beyond an instructor's education. Educators have been taught how to grade traditional assignments, but not those that utilize links, photos, videos, or other modes. Dawn Lombardi is a college professor who admitted to her students that she was a bit "technologically challenged" when assigning a multimodal essay using graphics. The most difficult part of these assignments is the assessment. Educators struggle to grade these assignments because the meaning conveyed may not be what the student intended. They must return to the basics of teaching to determine what they want their students to learn, achieve, and demonstrate in order to create criteria for multimodal tasks. Lombardi made grading criteria based on creativity, context, substance, process, and collaboration, which were presented to the students prior to beginning the essay.[10] Another type of visuals-related writing task is visual analysis, especially advertising analysis, which began in the 1940s and has been prevalent in postsecondary writing instruction for at least 50 years. This pedagogical practice of visual analysis did not focus on how visuals, including images, layout, or graphics, are combined or organized to make meaning.[11] Then, through the following years, the application of visuals in the composition classroom was continually explored, and the emphasis shifted to the visual features—margins, page layout, font, and size—of composition and its relationship to graphic design, web pages, and digital texts, which involve images, layout, color, font, and arrangements of hyperlinks. In line with the New London Group, George (2002) argues that both visual and verbal elements are crucial in multimodal designs.[11] Acknowledging the importance of both language and visuals in communication and meaning making, Shipka (2005) further advocates for a multimodal, task-based framework in which students are encouraged to use diverse modes and materials—print texts, digital media, videotaped performances, old photographs—and any combinations of them in composing their digital/multimodal texts. Meanwhile, students are provided with opportunities to deliver, receive, and circulate their digital products.
In so doing, students can understand how systems of delivery, reception, and circulation interrelate with the production of their work.[13] Hledbetter (talk) 22:21, 13 April 2019 (UTC)hledbetter[reply]

Wiki Education assignment: Language in Advertising

This article was the subject of a Wiki Education Foundation-supported course assignment, between 22 August 2022 and 16 December 2022. Further details are available on the course page. Student editor(s): Puentesjuan09 (article contribs).

— Assignment last updated by Puentesjuan09 (talk) 18:27, 12 December 2022 (UTC)[reply]

Wikipedia Article Edits

I come to suggest a revision to the definition section of this article. For a definition, it is both too long and too convoluted. I suggest polishing what has already been written, as well as removing the last paragraph of additional information that is not crucial to defining multimodality. The sources will be left untouched, but the direct quoting has been removed. Sobphie ‧₊˚✩彡 (talk) 20:56, 27 February 2023 (UTC)[reply]

Here is the suggested revised section. Citations will be adjusted/added during publishing:
Although multimodality discourse mentions both medium and mode, these terms are not synonymous. However, their precise extents may overlap depending on how precisely (or not) individual authors and traditions use the terms.
Gunther Kress's scholarship on multimodality is canonical within social semiotic approaches and has considerable influence on many other approaches, such as writing studies. Kress defines ‘mode’ in two ways. One: a mode is something that can be socially or culturally shaped to give something meaning; images, pieces of writing, and speech patterns are all examples of modes. Two: modes are semiotic, shaped by their intrinsic characteristics and their potential within their medium, as well as by what is required of them by their culture or society.
Thus, every mode has a distinct historical and cultural potential and/or limitation for its meaning. For example, if we broke down writing into its modal resources, we would have grammar, vocabulary, and graphic "resources" as the acting modes. Graphic resources can be further broken down into font size, type, color, spacing within paragraphs, etc. However, these resources are not deterministic. Instead, modes shape and are shaped by the systems in which they participate. Modes may aggregate into multimodal ensembles and be shaped over time into familiar cultural forms. A good example of this is film, which combines visual modes (in setting and in attire), modes of dramatic action and speech, and modes of music or other sounds. Studies of multimodal work in this field include van Leeuwen; Bateman and Schmidt; and Burn and Parker's theory of the Kineikonic Mode.
In social semiotic accounts, a medium is the substance in which meaning is realized and through which it becomes available to others. Mediums include video, image, text, audio, etc. Socially, a medium includes semiotic, sociocultural, and technological practices. Examples include film, newspapers, billboards, radio, television, a classroom, etc. Multimodality also makes use of the electronic medium by creating digital modes with the interlacing of image, writing, layout, speech, and video. Mediums have become modes of delivery that take the current and future contexts into consideration. Sobphie ‧₊˚✩彡 (talk) 21:00, 27 February 2023 (UTC)[reply]