Lead
Digital video was first introduced commercially in 1986 with the Sony D1 format[1], which recorded an uncompressed standard-definition component video signal in digital form. In addition to uncompressed formats, popular compressed digital video formats today include H.264 and MPEG-4. Modern interconnect standards used for playback of digital video include HDMI, DisplayPort, Digital Visual Interface (DVI) and serial digital interface (SDI).
History
Digital video cameras
Further information: Digital cinematography, Image sensor, and Video camera
The basis for digital video cameras is the metal-oxide-semiconductor (MOS) image sensor. The first practical semiconductor image sensor was the charge-coupled device (CCD), based on MOS capacitor technology and invented in 1969 by Willard S. Boyle, who shared the Nobel Prize in Physics for this work. Following the commercialization of CCD sensors in the late 1970s and early 1980s, the entertainment industry gradually transitioned from analog video to digital imaging and digital video over the next two decades. The CCD was followed by the CMOS active-pixel sensor (CMOS sensor), developed in the 1990s. CMOS sensors are beneficial because of their small size, high speed, and low power usage, and today they are the image sensors most commonly found in smartphone cameras such as the iPhone's.
Digital video coding
Further information: Video coding format § History
In the 1970s, pulse-code modulation (PCM) gave rise to digital video coding, which demanded high bit rates of 45–140 Mbit/s for standard-definition (SD) content[1]. By the 1980s, the discrete cosine transform (DCT) had become the standard for digital video compression[2].
MPEG-1, developed by the Moving Picture Experts Group (MPEG), followed in 1991 and was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which became the standard video format for DVD and SD digital television, then by MPEG-4/H.263 in 1999, and in 2003 by H.264/MPEG-4 AVC, which has become the most widely used video coding standard[3].
Overview
Digital video comprises a series of digital images displayed in rapid succession. In the context of video, these images are called frames. The rate at which frames are displayed is known as the frame rate and is measured in frames per second (FPS). Each frame is a digital image and so comprises an array of pixels. The color of a pixel is represented by a fixed number of bits stored within the image[4]. The more bits, the more subtle variations of color can be reproduced. For example, 8 bits capture 256 levels per channel, while 10 bits capture 1,024 levels per channel[5]. This is called the color depth, or bit depth, of the video.
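The relationship between bit depth and reproducible colors can be sketched as follows (the function names are illustrative only, not from any video library):

```python
# Sketch: how bit depth determines the number of reproducible color levels.
# Function names are illustrative only.

def levels_per_channel(bit_depth: int) -> int:
    """Each additional bit doubles the number of distinguishable levels."""
    return 2 ** bit_depth

def total_colors(bit_depth: int, channels: int = 3) -> int:
    """Total representable colors across all channels (e.g. RGB)."""
    return levels_per_channel(bit_depth) ** channels

print(levels_per_channel(8))   # 256 levels per channel, as cited above
print(levels_per_channel(10))  # 1024 levels per channel
print(total_colors(8))         # 16777216 colors for 8-bit RGB
```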
Bit rate and BPP
By definition, bit rate is a measure of the rate of information content of a digital video stream. For uncompressed video, bit rate corresponds directly to video quality, because it is proportional to every property that affects quality. Bit rate is an important property when transmitting video, because the transmission link must be capable of supporting that bit rate, and when storing video, because the file size is proportional to the bit rate and the duration. Video compression is used to greatly reduce the bit rate while having little effect on quality[6].
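For uncompressed video, that proportionality can be made concrete; a minimal sketch, where the resolution, bit depth, and frame rate are illustrative example values:

```python
# Sketch: bit rate of uncompressed video as the product of the properties
# named above. All example values are illustrative.

def uncompressed_bitrate(width: int, height: int,
                         bits_per_pixel: int, fps: int) -> int:
    """Bit rate in bits per second: pixels per frame x bit depth x frame rate."""
    return width * height * bits_per_pixel * fps

# 1920x1080 at 8-bit RGB (24 bits per pixel), 30 frames per second:
rate = uncompressed_bitrate(1920, 1080, 24, 30)
print(rate / 1e6)  # 1492.992 Mbit/s, which is why compression is needed
```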
Constant bit rate versus variable bit rate
BPP represents the average bits per pixel. Some compression algorithms keep the BPP almost constant throughout the entire duration of the video, producing output with a constant bit rate (CBR). CBR video is suitable for real-time, non-buffered, fixed-bandwidth video streaming (e.g. in videoconferencing). Since not all frames can be compressed to the same level, because quality is more severely impacted in scenes of high complexity, some algorithms continuously adjust the BPP, keeping it high while compressing complex scenes and low for less demanding scenes[7]. This provides the best quality at the smallest average bit rate (and, accordingly, the smallest file size). Because the bit rate tracks the variations in BPP, this method produces a variable bit rate (VBR).
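The contrast between the two strategies can be sketched as follows (the per-frame complexity values and the bit budget are made-up illustrative numbers, not taken from any real codec):

```python
# Sketch: CBR gives every frame the same bit budget; VBR shifts bits toward
# complex frames while keeping the same average. Complexity values are
# made-up illustrative numbers.

frame_complexity = [0.2, 0.9, 0.8, 0.1, 0.3]  # relative scene complexity
average_budget = 50_000                        # target average bits per frame

# CBR: identical allocation regardless of scene complexity.
cbr = [average_budget] * len(frame_complexity)

# VBR: allocate bits in proportion to complexity; the mean stays at the budget.
total = sum(frame_complexity)
vbr = [average_budget * len(frame_complexity) * c / total
       for c in frame_complexity]

print(cbr)                            # every frame gets 50000 bits
print([round(b) for b in vbr])        # complex frames get far more bits
print(round(sum(vbr) / len(vbr)))     # 50000: same average as CBR
```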
Technical overview
As of 2017, the highest resolution demonstrated for digital video generation is 132.7 megapixels (15360 × 8640 pixels)[8]. The highest speeds are attained in industrial and scientific high-speed cameras, which are capable of filming 1024×1024 video at up to 1 million frames per second for brief periods of recording.
Technical properties
Live digital video consumes bandwidth; recorded digital video consumes data storage. The amount of bandwidth or storage required is determined by the frame size, color depth and frame rate. Each pixel consumes a number of bits determined by the color depth. The data required to represent a frame is found by multiplying the bits per pixel by the number of pixels in the image. The bandwidth is found by multiplying the storage requirement for a frame by the frame rate, and the overall storage requirement for a program by multiplying the bandwidth by the duration of the program.
These calculations are accurate for uncompressed video, but because of the relatively high bit rate of uncompressed video, video compression is used extensively. In compressed video, each frame requires only a small percentage of the original bits. Lossless compression reduces data or bandwidth consumption by a factor of 5 to 12, but lossy compression is more common because it reduces consumption by factors of 20 to 200[9]. Note that not all frames need be compressed by the same percentage; what matters is the average compression factor over all frames taken together.
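The multiplication chain described above can be sketched as follows (the resolution, frame rate, duration, and the 50:1 compression factor are illustrative assumptions):

```python
# Sketch of the calculation chain described above:
#   bits per frame = pixels x color depth
#   bandwidth      = bits per frame x frame rate
#   storage        = bandwidth x duration / average compression factor
# All example values are illustrative.

def storage_bits(width: int, height: int, bits_per_pixel: int,
                 fps: int, seconds: int, compression: float = 1) -> float:
    bits_per_frame = width * height * bits_per_pixel
    bandwidth = bits_per_frame * fps          # bits per second
    return bandwidth * seconds / compression

# One hour of 1920x1080 video at 24 bits per pixel and 30 fps:
raw = storage_bits(1920, 1080, 24, 30, 3600)
print(raw / 8 / 1e9)       # 671.8464 GB uncompressed
print(raw / 8 / 1e9 / 50)  # about 13.4 GB with a 50:1 lossy compression factor
```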
Interfaces and cables
Storage formats
Encoding
See also: Video coding format and Video codec
- CCIR 601 used for broadcast stations
- MPEG-4 good for online distribution of large videos and video recorded to flash memory
- MPEG-2 used for DVDs, Super-VCDs, and many broadcast television formats
- MPEG-1 used for video CDs
- H.261
- H.263
- H.264 also known as MPEG-4 Part 10, or as AVC, used for Blu-ray Discs and some broadcast television formats
- VC-2 also known as Dirac Pro
- H.265 also known as MPEG-H Part 2, or as HEVC
- MOV used for QuickTime framework
- Theora used for video on Wikipedia
References
- ^ a b Hussain, Tariq (2020). Multimedia Computing. Booksclinic Publishing. ISBN 9789390192984.
- ^ Hanzo, Lajos (2007). Video compression and communications : from basics to H.261, H.263, H.264, MPEG2, MPEG4 for DVB and HSDPA-style adaptive turbo-transceivers. Peter J. Cherriman, Jürgen Streit, Lajos Hanzo (2nd ed.). Hoboken, NJ: IEEE Press. ISBN 978-0-470-51992-9. OCLC 181368622.
- ^ Christ, Robert D. (2013). The ROV manual : a user guide for remotely operated vehicles. Robert L. Wernli (2nd ed.). Oxford. ISBN 978-0-08-098291-5. OCLC 861797595.
- ^ Winkelman, Roy (2018). "Tech-Ease, What is bit depth?".
- ^ Steiner, Shawn (12 December 2018). "B&H, 8-Bit, 10-Bit, What Does It All Mean for Your Videos?".
- ^ Acharya, Tinku (2005). JPEG2000 standard for image compression : concepts, algorithms and VLSI architectures. Ping-Sing Tsai. Hoboken, N.J.: Wiley-Interscience. ISBN 0-471-65375-6. OCLC 57585202.
- ^ Weise, Marcus (2013). How video works. Diana Weynand (2nd ed.). New York. ISBN 1-136-06982-8. OCLC 1295602475.
- ^ "4K, 8K, 16K – Are You Ready for the Resolution Evolution?". CEPRO. 2017-04-19. Retrieved 2022-03-24.
- ^ Vatolin, Dmitriy. "Lossless Video Codecs Comparison 2007". www.compression.ru. Retrieved 2022-03-29.