|Developed by||Alliance for Open Media|
|Initial release||March 28, 2018|
|Type of format||Compressed video|
AOMedia Video 1 (AV1) is an open, royalty-free video coding format designed for video transmissions over the Internet. It is being developed by the Alliance for Open Media (AOMedia), a consortium of firms from the semiconductor industry, video on demand providers, and web browser developers, founded in 2015.
AV1 is meant to succeed its predecessor VP9 and compete with HEVC/H.265 from the Moving Picture Experts Group. It is the primary contender for standardization by the video standard working group NetVC of the Internet Engineering Task Force (IETF). The group has put together a list of criteria to be met by the new video standard.
The first official announcement of the project came with the press release on the formation of the Alliance on 1 September 2015. The increased usage of its predecessor VP9 is attributed to confidence in the Alliance and development of AV1, as well as to HEVC's (High Efficiency Video Coding) costly and complicated licensing situation.
The roots of the project precede the Alliance, however. Individual contributors started experimental technology platforms years before: Xiph's/Mozilla's Daala already published code in 2010, VP10 was announced on 12 September 2014, and Cisco's Thor was published on 11 August 2015. The first version 0.1.0 of the AV1 reference codec was published on 7 April 2016.
Soft feature freeze occurred at the end of October 2017, but development of a few significant features continued beyond that point. The bitstream format was projected to be frozen in January 2018; however, this was delayed by unresolved critical bugs as well as final changes to transformations, syntax, the prediction of motion vectors, and the completion of the legal analysis. The Alliance announced the release of the AV1 bitstream specification on 28 March 2018, along with a reference, software-based encoder and decoder. On 25 June 2018, a validated version 1.0.0 of the specification was released.
Martin Smole of AOM member Bitmovin said that the computational efficiency of the reference encoder was the greatest remaining challenge after the bitstream format freeze. While the format was still being worked on, the encoder was not targeted for production use and received no speed optimizations; as a result, it ran orders of magnitude slower than existing HEVC encoders. Development was therefore planned to shift its focus towards maturing the reference encoder after the freeze.
To fulfill the goal of being royalty-free, the development process requires that no feature is adopted before it has been independently verified not to infringe on patents of competing companies. This contrasts with its main competitor HEVC, for which a review of the intellectual property rights (IPR review) was not part of the standardization process. The latter reviewing practice is stipulated in the ITU-T's definition of an open standard.
The possible existence of yet unknown patents has been a recurring concern in the field of royalty-free multimedia formats; the concern has been raised regarding AV1, and previously VP9, Theora and IVC. The problem of unforeseen patents is not unique to royalty-free formats, but it uniquely threatens their status as royalty-free. In contrast, IPR avoidance has not traditionally been a priority in MPEG's business model for royalty-bearing formats (although the MPEG chairman argues it has to change).
|Patent licensing||AV1, VP9, Theora, etc.||HEVC, AVC, etc.||GIF, MP3, MPEG-1, etc.|
|By known patent holders||royalty-free||royalty-bearing||expired|
|By unknown patent holders||impossible to know until expiry||impossible to know until expiry||impossible to know until expiry|
Under patent rules adopted from the World Wide Web Consortium (W3C), technology contributors license their AV1-connected patents to anyone, anywhere, anytime based on reciprocity, i.e. as long as the user does not engage in patent litigation. As a defensive condition, anyone engaging in patent litigation loses the right to the patents of all patent holders.
The performance goals include "a step up from VP9 and HEVC" in efficiency for a low increase in complexity. NetVC's efficiency goal is a 25% improvement over HEVC. The primary complexity concern is software decoding, since hardware support will take time to reach users. For WebRTC, however, live encoding performance is also relevant, which is Cisco's agenda: Cisco is a manufacturer of videoconferencing equipment, and their Thor contributions aim at "reasonable compression at only moderate complexity".
Feature-wise, it is specifically designed for real-time applications (especially WebRTC) and higher resolutions (wider color gamuts, higher frame rates, UHD) than typical usage scenarios of the current generation (H.264) of video formats, where it is expected to achieve its biggest efficiency gains. It is therefore planned to support the color space from ITU-R Recommendation BT.2020 and 10 and 12 bits of precision per color component. AV1 is primarily intended for lossy encoding, although lossless compression is supported as well.
AV1 is a traditional block-based frequency transform format featuring new techniques, several of which were developed in experimental formats that have been testing technology for a next-generation format after HEVC and VP9. Based on Google's experimental VP9 evolution project VP10, AV1 incorporates additional techniques developed in Xiph's/Mozilla's Daala and Cisco's Thor.
|Developer(s)||Alliance for Open Media|
|Written in||C, assembly|
The Alliance published a reference implementation written in C and assembly language (aomenc, aomdec) as free software under the terms of the BSD 2-Clause License. Development happens in public and is open for contributions, regardless of AOM membership.
There is another open source encoder, namely rav1e, which – unlike aomenc – aims to be the simplest and fastest conforming encoder at the expense of efficiency.
The development process is such that coding tools are added to the reference codebase as experiments, controlled by flags that enable or disable them at build time. New experiments are reviewed by other group members as well as by specialized teams that help ensure hardware friendliness and compliance with intellectual property rights (TAPAS). Once a feature gains some support in the community, the experiment can be enabled by default, and its flag is ultimately removed once all reviews have passed. Experiment names are lowercased in the configure script and uppercased in conditional compilation flags.
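The naming convention can be illustrated with a small sketch (the helper `config_flag` is hypothetical; the experiment names are taken from this article):

```python
# Hypothetical illustration of the convention described above: an
# experiment's lowercase configure-script name maps to an uppercase
# CONFIG_* conditional-compilation flag.
def config_flag(experiment: str) -> str:
    """Map a configure-script experiment name to its build flag."""
    return "CONFIG_" + experiment.upper()

# Experiment names taken from this article's list of former experiments.
for name in ("cdef", "global_motion", "palette"):
    print(name, "->", config_flag(name))
```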
To transform the error remaining after prediction to the frequency domain, AV1 uses square and rectangular DCTs, as well as an asymmetric DST for blocks where the top and/or left edge is expected to have lower error thanks to prediction from nearby pixels.
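As a rough illustration of the energy-compaction idea (a floating-point sketch of the orthonormal 1-D DCT-II, not AV1's actual integer transforms):

```python
import numpy as np

def dct2(x):
    """Orthonormal 1-D DCT-II: compacts smooth residual signals into a
    few low-frequency coefficients (floating-point sketch only)."""
    N = len(x)
    n = np.arange(N)
    X = np.array([np.dot(x, np.cos(np.pi / N * (n + 0.5) * k))
                  for k in range(N)])
    X[0] *= np.sqrt(1.0 / N)
    X[1:] *= np.sqrt(2.0 / N)
    return X

# A constant residual row: all energy ends up in the DC coefficient.
coeffs = dct2(np.ones(8))
```

A perfectly flat input produces a single non-zero (DC) coefficient, which is why smooth prediction residuals code cheaply after the transform.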
Prediction can happen for bigger units (≤128×128), and they can be subpartitioned in more ways. "T-shaped" partitioning schemes for coding units are introduced, a feature developed for VP10. Two separate predictions can now be used on spatially different parts of a block using a smooth, wedge-shaped transition line (wedge-partitioned prediction). This enables more accurate separation of objects without the traditional staircase lines along the boundaries of square blocks.
More encoder parallelism is possible thanks to configurable prediction dependency between tile rows.
AV1 performs internal processing in higher precision (10 or 12 bits per sample), which leads to compression improvement due to smaller rounding errors in reference imagery.
Predictions can be combined in more advanced ways (than a uniform average) in a block (compound prediction), including smooth and sharp transition gradients in different directions (wedge-partitioned prediction) as well as implicit masks that are based on the difference between the two predictors. This allows combination of either two inter predictions or an inter and an intra prediction to be used in the same block.
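The blending idea can be sketched as follows (illustrative only; the mask here is a simple horizontal gradient standing in for a wedge-shaped transition, not the bitstream's exact weights):

```python
import numpy as np

# Two candidate predictors for the same 4x4 block.
pred_inter = np.full((4, 4), 100.0)  # e.g. prediction from a previous frame
pred_intra = np.full((4, 4), 200.0)  # e.g. prediction from neighbouring pixels

# Per-pixel weight of the first predictor: a smooth left-to-right transition.
mask = np.tile(np.linspace(1.0, 0.0, 4), (4, 1))

# Compound prediction: a weighted blend rather than a uniform average.
blended = mask * pred_inter + (1.0 - mask) * pred_intra
```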
A frame can reference 6 instead of 3 of the 8 available frame buffers for temporal (inter) prediction.
The Warped Motion (warped_motion) and Global Motion (global_motion) tools in AV1 aim to reduce redundant information in motion vectors by recognizing patterns arising from camera motion. They implement ideas that earlier formats such as MPEG-4 ASP tried to exploit, albeit with a novel approach that works in three dimensions. A set of warping parameters for a whole frame can be offered in the bitstream, or blocks can use a set of implicit local parameters computed from surrounding blocks.
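A toy sketch of a global (affine) motion model of the kind such a tool can signal per frame: blocks derive their motion vectors from a handful of shared parameters instead of coding each vector explicitly (the helper `global_mv` is hypothetical):

```python
def global_mv(x, y, params):
    """Motion vector at (x, y) implied by affine parameters
    (a, b, c, d, tx, ty): warped position minus original position."""
    a, b, c, d, tx, ty = params
    return (a * x + b * y + tx - x, c * x + d * y + ty - y)

# A slow zoom-in: every block's vector points outward from the origin,
# yet only six parameters describe the whole frame's motion field.
zoom = (1.1, 0.0, 0.0, 1.1, 0.0, 0.0)
```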
For intra prediction, there are 56 (instead of 8) angles for directional prediction, plus weighted filters for per-pixel extrapolation. The "TrueMotion" predictor was replaced with a Paeth predictor, which compares the known pixel in the above-left corner with the pixels directly above and directly left of the new one, and then chooses the one lying in the direction of the smaller gradient as the predictor. A palette predictor is available for blocks with very few colors, as in some computer screen content. Correlations between the luminosity and the color information can now be exploited with a predictor for chroma blocks that is based on samples from the luma plane (cfl). To reduce discontinuities along the borders of inter-predicted blocks, predictors can be overlapped and blended with those of neighbouring blocks (overlapped block motion compensation).
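A minimal sketch of a Paeth-style predictor, in the classic formulation known from PNG filtering (AV1's exact mode may differ in detail):

```python
def paeth(left: int, above: int, above_left: int) -> int:
    """Predict a pixel from its three causal neighbours: form the
    gradient estimate left + above - above_left, then use the
    neighbour whose value is closest to that estimate."""
    base = left + above - above_left
    return min((left, above, above_left), key=lambda v: abs(base - v))

# On a horizontal edge (left matches the corner), the pixel above wins:
# the gradient toward 'above' is the one worth following.
print(paeth(5, 8, 5))
```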
AV1 has new optimized quantization matrices.
For the in-loop filtering step, the integration of Thor's constrained low-pass filter and Daala's directional deringing filter has been fruitful: the combined Constrained Directional Enhancement Filter (cdef) exceeds the results of using the original filters separately or together. It is an edge-directed conditional replacement filter that smooths blocks with configurable (signaled) strength, roughly along the direction of the dominant edge, to eliminate ringing artifacts.
There is also a loop restoration filter (loop_restoration) to remove blur artifacts due to block processing.
Film grain synthesis (film_grain) improves coding of noisy signals using a parametric video coding approach.
Due to the randomness inherent to film grain noise, this signal component is traditionally either very expensive to code or prone to get damaged or lost, possibly leaving serious coding artefacts as residue. This tool circumvents these problems using analysis and synthesis, replacing parts of the signal with a visually similar synthetic texture, based solely on subjective visual impression instead of objective similarity. It removes the grain component from the signal, analyzes its non-random characteristics, and instead transmits only descriptive parameters to the decoder, which adds back a synthetic, pseudorandom noise signal that's shaped after the original component.
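The analysis/synthesis idea can be sketched with a toy one-parameter model (the real film_grain tool signals a much richer parametric description):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 4.0 * np.pi, 256))  # underlying signal
noisy = clean + rng.normal(0.0, 0.1, 256)           # source with "film grain"

# Encoder side: denoise (assumed ideal here) and keep only a descriptive
# parameter of the grain instead of the grain itself.
denoised = clean
grain_std = float(np.std(noisy - denoised))         # the transmitted parameter

# Decoder side: regenerate visually similar (not identical) grain from
# the parameter and add it back onto the decoded signal.
reconstructed = denoised + np.random.default_rng(1).normal(0.0, grain_std, 256)
```

The reconstructed grain matches the original only in its statistics, which is exactly the point: subjective visual impression is preserved without spending bits on the random component.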
Daala's entropy coder (daala_ec), a non-binary arithmetic coder, was selected to replace VP9's binary entropy coder. The use of non-binary arithmetic coding helps evade patents, but also adds bit-level parallelism to an otherwise serial process, reducing clock rate demands on hardware implementations. In effect, the efficiency of modern binary arithmetic coding such as CABAC is approached while using an alphabet larger than binary, hence greater speed, much as with a Huffman code (though not as simple and fast as a Huffman code).
AV1 also gained the ability to adapt the symbol probabilities in the arithmetic coder per coded symbol instead of per frame (ec_adapt).
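A back-of-the-envelope illustration of why a larger alphabet reduces serial coder operations (the probabilities are made up for illustration):

```python
import math

# A 4-symbol alphabet with skewed probabilities.
probs = [0.5, 0.25, 0.125, 0.125]

# Information content per symbol, in bits (1.75 for this distribution).
entropy = -sum(p * math.log2(p) for p in probs)

# A binary coder must make several serial yes/no decisions per symbol
# (modelled here by an optimal binary prefix code), while a non-binary
# arithmetic coder handles the whole symbol in one operation -- the
# source of the bit-level parallelism mentioned above.
binary_decisions = sum(p * len(code)
                       for p, code in zip(probs, ["0", "10", "110", "111"]))
nonbinary_decisions = 1.0
```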
Former experiments that have been fully integrated
|Historic build-time flag||Explanation|
|alt_intra||A new prediction mode suitable for smooth regions|
|cdef||Constrained Directional Enhancement Filter: the merge of Daala's directional deringing filter and Thor's constrained low-pass filter|
||An optimization of cdef|
|cfl||Chroma from Luma|
|delta_q||Delta quantization step: arbitrary adaptation of quantizers within a frame|
|daala_ec||The Daala entropy coder (a non-binary arithmetic coder)|
||Ability to choose different horizontal and vertical interpolation filters for subpixel motion compensation|
|ec_adapt||Adapts symbol probabilities on the fly, as opposed to per frame as in VP9|
|ec_smallmul||A hardware optimization of daala_ec|
|ext_intra||Extended intra: 65 angular intra prediction modes|
|ext_refs||Extended reference frames: adds more reference frames, as described in Adaptive multi-reference prediction using a symmetric framework|
||Option of no dependency across tile rows|
|ext_tx||Ability to choose different horizontal and vertical transforms|
|filter_7bit||7-bit interpolation filters|
||Interpolate the reference samples before prediction to reduce the impact of quantization noise|
|interintra||Inter-intra prediction, part of wedge-partitioned prediction|
|loop_restoration||Remove blur artifacts due to block processing|
|motion_var||Renamed from obmc. Overlapped Block Motion Compensation: reduce discontinuities at block edges using different motion vectors|
|new_multisymbol||Code extra_bits using up to 5 non-adaptive symbols, starting from the LSB|
|palette||Palette prediction: intra coding tool for screen content|
|ref_mv||Better methods for coding the motion vector predictors through an implicit list of spatial and temporal neighbor MVs|
|txmg||Merge high/low bitdepth transforms|
|var_tx||Recursive transform block partition and coding scheme|
|wedge||Wedge-partitioned prediction|
Only explained experiments are listed.
|Enabled by default||Build-time flag||Explanation|
||dist_8x8||A merge of the former experiments cdef_dist and daala_dist. Daala_dist is Daala's distortion function.|
Notable abandoned features
Daala Transforms (daala_tx) implements discrete cosine and sine transforms that its authors describe as "better in every way" than the txmg set of transforms that prevailed in AV1. Both the txmg and daala_tx experiments merged high and low bit-depth code paths (unlike VP9), but daala_tx additionally achieved full embedding of smaller transforms within larger ones, as well as using fewer multiplications, which could have further reduced the cost of hardware implementations. The Daala transforms were kept as an option in the experimental codebase until late January 2018, but changing hardware blocks at such a late stage was a general concern for delaying hardware availability.
The encoding complexity of Daala's Perceptual Vector Quantization (PVQ) proved too high within the already complex framework of AV1. The rate-distortion heuristic dist_8x8 aims to speed up the encoder by a sizable factor, with or without PVQ, but PVQ was ultimately dropped.
Quality and efficiency
In April 2017, using the 8 enabled experimental features at the time (of 77 total), Bitmovin was able to demonstrate favorable objective metrics, as well as visual results, compared to HEVC on the Sintel and Tears of Steel animated films. A follow-up comparison by Jan Ozer of Streaming Media Magazine confirmed this, and concluded that "AV1 is at least as good as HEVC now".
Ozer noted that his and Bitmovin's results contradicted a late-2016 comparison by the Fraunhofer Institute for Telecommunications that had found AV1 38.4% less efficient than HEVC, underperforming even H.264/AVC; he attributed the discrepancy to his use of encoding parameters endorsed by each encoder vendor, as well as to the additional features present in the newer AV1 encoder.
Tests from Netflix showed that, based on measurements with PSNR and VMAF at 720p, AV1 could be about 25% more efficient than VP9 (libvpx), at the expense of a 4–10 fold increase in encoding complexity. Similar conclusions with respect to quality were drawn from a test conducted by Moscow State University researchers, where VP9 was found to require 31% and HEVC 22% more bitrate than AV1 for the same level of quality. The researchers found that the used AV1 encoder was operating at a speed “2500–3500 times lower than competitors”, while admitting that it has not been optimized yet.
In a comparison of AV1 against H.264 (x264) and VP9 (libvpx), Facebook showed about 45–50% bitrate savings over H.264 and about 40% over VP9 when using a constant quality encoding mode.
AOMedia provides a list of test results on their website.
Profiles and levels
AV1 defines three profiles for decoders: Main, High, and Professional. The Main profile allows for a bit depth of 8 to 10 bits per sample with 4:0:0 (greyscale) and 4:2:0 chroma sampling. The High profile allows for a bit depth of 8 to 10 bits per sample with 4:0:0, 4:2:0, and 4:4:4 chroma sampling. The Professional profile allows for a bit depth of 8 to 12 bits per sample with 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling.
AV1 defines levels for decoders, with maximum variables for levels ranging from 2.0 to 7.3. Example resolutions would be 426×240@30fps for level 2.0, 854×480@30fps for level 3.0, 1920×1080@30fps for level 4.0, 3840×2160@60fps for level 5.1, 3840×2160@120fps for level 5.2, and 7680×4320@120fps for level 6.2.
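The profile constraints described above can be captured in a small lookup sketch (`profile_for` is a hypothetical helper, not an AV1 API):

```python
# The three decoder profiles as described above: permitted bit depths
# and chroma sampling schemes.
PROFILES = {
    "Main":         {"depths": (8, 10), "chroma": {"4:0:0", "4:2:0"}},
    "High":         {"depths": (8, 10), "chroma": {"4:0:0", "4:2:0", "4:4:4"}},
    "Professional": {"depths": (8, 12), "chroma": {"4:0:0", "4:2:0", "4:2:2",
                                                   "4:4:4"}},
}

def profile_for(bit_depth: int, chroma: str) -> str:
    """Return the least capable profile permitting the given stream
    parameters (relies on the dict's insertion order)."""
    for name, caps in PROFILES.items():
        lo, hi = caps["depths"]
        if lo <= bit_depth <= hi and chroma in caps["chroma"]:
            return name
    raise ValueError("no profile permits this combination")
```

For example, a 12-bit 4:2:2 stream requires the Professional profile, while an 8-bit 4:2:0 stream fits in Main.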
Like its predecessor VP9, AV1 can be used inside WebM container files alongside the Opus audio format. These formats are well supported among web browsers, with the exception of Safari (only has Opus support) and the discontinued Internet Explorer (prior to Edge) (see VP9 in HTML5 video).
From November 2017 onwards, nightly builds of the Firefox web browser contained preliminary support for AV1. Upon its release on 9 February 2018, version 3.0.0 of the VLC media player shipped with an experimental AV1 decoder. 
Alliance members are expected to have an interest in adopting the format, each in their own way, once the bitstream is frozen. The member companies represent several industries, including browser vendors (Apple, Google, Mozilla, Microsoft), content distributors (Apple, Amazon, Facebook, Google, Hulu, Netflix) and hardware designers (AMD, Apple, Arm, Broadcom, Intel, Nvidia). Video streaming service YouTube has declared its intent to transition to the new format as fast as possible, starting with the highest resolutions, within six months after the finalization of the bitstream format. Netflix "expects to be an early adopter of AV1".
According to Mukund Srinivasan, chief business officer of AOM member Ittiam, early hardware support will be dominated by software running on non-CPU hardware (such as GPGPU, DSP or shader programs, as is the case with some VP9 hardware implementations), as fixed-function hardware will take 12–18 months after bitstream freeze until chips are available, plus 6 months for products based on those chips to hit the market. The bitstream was finally frozen on 28 March 2018, meaning chips could be available sometime between March and August 2019. According to the above forecast, products based on chips could then be on the market at the end of 2019 or the beginning of 2020.
Mozilla researchers Nathan Egge and Michael Bebenita claimed in an interview in April 2018 that the web browser Mozilla Firefox would have AV1 support enabled by default by the end of 2018.
- Mozilla Firefox Nightly (since November 2017)
- Google Chrome (since 69)
- VLC media player (since 3.0)
- GStreamer (since 1.14)
- FFmpeg (since 4.0)
- MKVToolNix (since version 22)
- MediaInfo (since 18.03)
- Bitmovin Encoding (since v1.50)
AV1 Still Image File Format (AVIF)
The AV1 Still Image File Format (AVIF) is a file format wrapping compressed images based on the Alliance for Open Media AV1 intra-frame encoding toolkit. AVIF supports High Dynamic Range (HDR) and wide color gamut (WCG) images as well as standard dynamic range (SDR). Only the intra-frame encoding toolkit is used in AVIF version 1.0. Using the intra-frame encoding mechanism from an existing video codec standard has precedent in WebP (using VP8) and HEIF (using HEVC).
The initial version of AVIF seeks to be simple, with just enough structure to allow the distribution of images based on the AV1 intra-frame coding toolset. At its core, AVIF 1.0 will allow for one or more images plus all supporting data needed to correctly reconstruct and display the images to be conveyed in a file. The ability to embed a thumbnail image will also be provided. An image sequence with suggested playback timing may be defined.
- AV1 intra-frame codec toolkit
- Multiple image storage: untimed unordered collection
- Animation: timed sequence of images
- Thumbnail image
- Alpha channel
- Extensible image metadata
- Zimmerman, Steven (15 May 2017). "Google's Royalty-Free Answer to HEVC: A Look at AV1 and the Future of Video Codecs". XDA Developers. Archived from the original on 14 June 2017. Retrieved 10 June 2017.
- Rick Merritt (EE Times), 30 June 2016: Video Compression Feels a Pinch
- Sebastian Grüner (19 July 2016). "Der nächste Videocodec soll 25 Prozent besser sein als H.265" (in German). golem.de. Retrieved 1 March 2017.
- Tsahi Levent-Levi (3 September 2015). "WebRTC Codec Wars: Rebooted". BlogGeek.me. Retrieved 1 March 2017.
The beginning of the end of HEVC/H.265 video codec
- "Alliance for Open Media established to deliver next-generation open media formats" (Press release). Alliance for Open Media. 1 September 2015. Retrieved 5 September 2015.[self-published source]
- Timothy B. Terriberry (18 January 2017). "Progress in the Alliance for Open Media" (video). linux.conf.au. Retrieved 1 March 2017.[self-published source]
- Timothy B. Terriberry (18 January 2017). "Progress in the Alliance for Open Media (slides)" (PDF). Retrieved 22 June 2017.[self-published source]
- Stephen Shankland (September 12, 2014). "Google's Web-video ambitions bump into hard reality". CNET. Retrieved September 13, 2014.
- Krishnan, Jai (22 November 2017). "Jai Krishnan from Google and AOMedia giving us an update on AV1". YouTube. Retrieved 22 December 2017.[self-published source]
- Terriberry, Timothy B. (2018-02-03). "AV1 Codec Update". FOSDEM. Retrieved 2018-02-08.[self-published source]
- Alliance for Open Media (28 March 2018). "The Alliance for Open Media Kickstarts Video Innovation Era with "AV1" Release" (Press release). Wakefield, Mass.
- Shilov, Anton. "Alliance for Open Media Releases Royalty-Free AV1 1.0 Codec Spec". AnandTech. Retrieved 2 April 2018.
- "AV1 Bitstream and Decoding Process Specification". Alliance for Open Media. Retrieved 26 June 2018.
This version 1.0.0 of the AV1 Bitstream Specification corresponds to the Git tag v1.0.0 in the AOMediaCodec/av1-spec project. Its content has been validated as consistent with the reference decoder provided by libaom v1.0.0.
- Larabel, Michael (25 June 2018). "AOMedia AV1 Codec v1.0.0 Appears Ready For Release". www.phoronix.com. Retrieved 27 June 2018.
- Hunter, Philip (2018-02-15). "Race on to bring AV1 open source codec to market, as code freezes". Videonet. Mediatel Limited. Retrieved 2018-03-19.
- Daede, Thomas (5 October 2017). "AV1 Update". YouTube. Retrieved 21 December 2017.[self-published source]
- Frost, Matt (31 July 2017). "VP9-AV1 Video Compression Update". Retrieved 21 November 2017.
Obviously, if we have an open source codec, we need to take very strong steps, and be very diligent in making sure that we are in fact producing something that's royalty free. So we have an extensive IP diligence process which involves diligence on both the contributor level – so when Google proposes a tool, we are doing our in-house IP diligence, using our in-house patent assets and outside advisors – that is then forwarded to the group, and is then again reviewed by an outside counsel that is engaged by the alliance. So that's a step that actually slows down innovation, but is obviously necessary to produce something that is open source and royalty free.
- Jan Ozer (28 March 2018). "AV1 Is Finally Here, but Intellectual Property Questions Remain". Retrieved 21 April 2018.
- Jan Ozer (June 2016). "VP9 Finally Comes of Age, But Is it Right for Everyone?". Retrieved 21 April 2018.
- Silvia Pfeiffer (December 2009). "Patents and their effect on Standards: Open video codecs for HTML5". Retrieved 21 April 2018.
- Leonardo Chiariglione (28 January 2018). "A crisis, the causes and a solution". Retrieved 21 April 2018.
two tracks in MPEG: one track producing royalty free standards (Option 1, in ISO language) and the other the traditional Fair Reasonable and Non Discriminatory (FRAND) standards (Option 2, in ISO language). (…) The Internet Video Coding (IVC) standard was a successful implementation of the idea (…). Unfortunately 3 companies made blank Option 2 statements (of the kind “I may have patents and I am willing to license them at FRAND terms”), a possibility that ISO allows. MPEG had no means to remove the claimed infringing technologies, if any, and IVC is practically dead.
- Leonardo Chiariglione (28 January 2018). "A crisis, the causes and a solution". Retrieved 21 April 2018.
How could MPEG achieve this? Thanks to its “business model” that can simply be described as: produce standards having the best performance as a goal, irrespective of the IPR involved.
- Neil McAllister, 1 September 2015: Web giants gang up to take on MPEG LA, HEVC Advance with royalty-free streaming codec – Joining forces for cheap, fast 4K video
- Steinar Midtskogen, Arild Fuldseth, Gisle Bjøntegaard, Thomas Davies (13 September 2017). "Integrating Thor tools into the emerging AV1 codec" (PDF). Retrieved 2 October 2017.
What can Thor add to VP9/AV1? Since Thor aims for reasonable compression at only moderate complexity, we considered features of Thor that could increase the compression efficiency of VP9 and/or reduce the computational complexity.
- Ozer, Jan (3 June 2016). "What is AV1?". Streaming Media. Information Today, Inc. Archived from the original on 26 November 2016. Retrieved 26 November 2016.
... Once available, YouTube expects to transition to AV1 as quickly as possible, particularly for video configurations such as UHD, HDR, and high frame rate videos ... Based upon its experience with implementing VP9, YouTube estimates that they could start shipping AV1 streams within six months after the bitstream is finalized. ...
- "examples/lossless_encoder.c". Git at Google. Alliance for Open Media. Retrieved 2017-10-29.[self-published source]
- Shankland, Stephen (2018-01-19). "Photo format from Google and Mozilla could leave JPEG in the dust". CNET. CBS Interactive. Retrieved 2018-01-28.
- Romain Bouqueau (12 June 2016). "A view on VP9 and AV1 part 1: specifications". GPAC Project on Advanced Content. Retrieved 1 March 2017.
- Jan Ozer, 26 May 2016: What Is VP9?
- "The fastest and safest AV1 encoder". Retrieved 2018-04-09.
- Ozer, Jan (30 August 2017). "AV1: A status update". Retrieved 14 September 2017.
- Cho, Yushin (30 August 2017). "Delete daala_dist and cdef-dist experiments in configure". Retrieved 2 October 2017.
Since those two experiments have been merged into the dist-8x8 experiment[self-published source]
- Jingning Han, Ankur Saxena, Vinay Melkote, and Kenneth Rose, Jointly Optimized Spatial Prediction and Block Transform for Video and Image Coding, IEEE Transactions on Image Processing, April 2012
- Alaiwan, Sebastien (2 November 2017). "Remove experimental flag of EXT_TX". Retrieved 23 November 2017.[self-published source]
- "Analysis of the emerging AOMedia AV1 video coding format for OTT use-cases" (PDF). Retrieved 19 September 2017.
- Converse, Alex (16 November 2015). "New video coding techniques under consideration for VP10 – the successor to VP9". YouTube. Retrieved 3 December 2016.[self-published source]
- "Decoding the Buzz over AV1 Codec". 9 June 2017. Retrieved 22 June 2017.[self-published source]
- Mukherjee, Debargha; Su, Hui; Bankoski, Jim; Converse, Alex; Han, Jingning; Liu, Zoe; Xu (Google Inc.), Yaowu, "An overview of new video coding tools under consideration for VP10 – the successor to VP9", SPIE Optical Engineering+ Applications, International Society for Optics and Photonics, 9599, doi:10.1117/12.2191104
- Alaiwan, Sebastien (31 October 2017). "Remove experimental flag of WARPED_MOTION". Retrieved 23 November 2017.[self-published source]
- Alaiwan, Sebastien (30 October 2017). "Remove experimental flag of GLOBAL_MOTION". Retrieved 23 November 2017.[self-published source]
- Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe (2017-09-19). "Novel inter and intra prediction tools under consideration for the emerging AV1 video codec". Applications of Digital Image Processing XL, proceedings of SPIE Optical Engineering + Applications 2017. International Society for Optics and Photonics. 10396: 103960F. doi:10.1117/12.2274022.
- Davies, Thomas (9 August 2017). "AOM_QM: enable by default". Retrieved 19 September 2017.[self-published source]
- Barbier, Frederic (10 November 2017). "Remove experimental flag of CDEF". Retrieved 23 October 2017.[self-published source]
- "Constrained Directional Enhancement Filter". 28 March 2017. Retrieved 15 September 2017.[self-published source]
- "Thor update". July 2017. Retrieved 2 October 2017.[self-published source]
- Egge, Nathan (25 May 2017). "This patch forces DAALA_EC on by default and removes the dkbool coder". Retrieved 14 September 2017.[self-published source]
- Egge, Nathan (14 February 2017). "Daala Entropy Coder in AV1" (PDF).[self-published source]
- Egge, Nathan (18 June 2017). "Remove the EC_ADAPT experimental flags". Retrieved 23 September 2017.[self-published source]
- Joshi, Urvang (1 June 2017). "Remove ALT_INTRA flag". Retrieved 19 September 2017.[self-published source]
- Mukherjee, Debargha (21 October 2017). "Remove CONFIG_CB4X4 config options". Retrieved 29 October 2017.[self-published source]
- "NETVC Hackathon Results IETF 98 (Chicago)". Retrieved 15 September 2017.
- "xiphmont | next generation video: Introducing AV1, part1: Chroma from Luma". xiphmont.dreamwidth.org. Retrieved 2018-04-10.
- Su, Hui (23 October 2017). "Remove experimental flag of chroma_sub8x8". Retrieved 29 October 2017.[self-published source]
- Mukherjee, Debargha (29 October 2017). "Remove compound_segment/wedge config flags". Retrieved 23 November 2017.[self-published source]
- Wang, Yunqing (12 December 2017). "Remove convolve_round/compound_round config flags". Retrieved 17 December 2017.[self-published source]
- Davies, Thomas (19 September 2017). "Remove delta_q experimental flag". Retrieved 2 October 2017.[self-published source]
- Terriberry, Timothy (25 August 2017). "Remove the EC_SMALLMUL experimental flag". Retrieved 15 September 2017.[self-published source]
- Alaiwan, Sebastien (2 October 2017). "Remove compile guards for CONFIG_EXT_INTER". Retrieved 29 October 2017.
This experiment has been adopted[self-published source]
- Alaiwan, Sebastien (16 October 2017). "Remove compile guards for CONFIG_EXT_REFS". Retrieved 29 October 2017.
This experiment has been adopted[self-published source]
- Zoe Liu; Debargha Mukherjee; Wei-Ting Lin; Paul Wilkins; Jingning Han; Yaowu Xu (4 July 2017). "Adaptive Multi-Reference Prediction Using A Symmetric Framework". Retrieved 29 October 2017.
- Davies, Thomas (19 September 2017). "Remove filter_7bit experimental flag". Retrieved 29 October 2017.[self-published source]
- Fuldseth, Arild (26 August 2017). "7-bit interpolation filters". Retrieved 29 October 2017.
Purpose: Reduce dynamic range of interpolation filter coefficients from 8 bits to 7 bits. Inner product for 8-bit input data can be stored in a 16-bit signed integer.[self-published source]
- Chen, Yue (30 October 2017). "Remove CONFIG_INTERINTRA". Retrieved 23 November 2017.[self-published source]
- Alaiwan, Sebastien (31 October 2017). "Remove experimental flag of MOTION_VAR". Retrieved 23 November 2017.[self-published source]
- Chen, Yue (13 October 2017). "Renamings for OBMC experiment". Retrieved 19 September 2017.[self-published source]
- Barbier, Frederic (15 November 2017). "Remove experimental flag of NEW_MULTISYMBOL". Retrieved 23 October 2017.[self-published source]
- "NEW_MULTISYMBOL: Code extra_bits using multi-symbols". Git at Google. Alliance for Open Media. Retrieved 2018-05-25.
- Liu, Zoe (7 November 2017). "Remove ONE_SIDED_COMPOUND experimental flag". Retrieved 23 November 2017.[self-published source]
- Joshi, Urvang (1 June 2017). "Remove PALETTE flag". Retrieved 19 September 2017.[self-published source]
- "Overview of the Decoding Process (Informative)". Retrieved 21 January 2018.
For certain types of image, such as PC screen content, it is likely that the majority of colors come from a very small subset of the color space. This subset is referred to as a palette. AV1 supports palette prediction, whereby non-inter frames are predicted from a palette containing the most likely colors.[self-published source]
- Barbier, Frederic (15 December 2017). "Remove experimental flag of PALETTE_DELTA_ENCODING". Retrieved 17 December 2017.[self-published source]
- Joshi, Urvang (26 September 2017). "Remove rect_intra_pred experimental flag". Retrieved 2 October 2017.[self-published source]
- Mukherjee, Debargha (29 October 2017). "Remove experimental flag for rect-tx". Retrieved 23 November 2017.[self-published source]
- Mukherjee, Debargha (1 July 2016). "Rectangular transforms 4x8 & 8x4". Retrieved 14 September 2017.[self-published source]
- Alaiwan, Sebastien (27 April 2017). "Merge ref-mv into codebase". Retrieved 23 September 2017.[self-published source]
- Joshi, Urvang (9 November 2017). "Remove smooth_hv experiment flag". Retrieved 23 November 2017.[self-published source]
- Davies, Thomas (18 July 2017). "Remove the CONFIG_TILE_GROUPS experimental flag". Retrieved 19 September 2017.[self-published source]
- Chiang, Angie (31 July 2017). "Add txmg experiment". Retrieved 3 January 2018. "This experiment aims at merging lbd/hbd txfms"[self-published source]
- Alaiwan, Sebastien (24 October 2017). "Remove compile guards for VAR_TX experiment". Retrieved 29 October 2017. "This experiment has been adopted"[self-published source]
- "Add support to recursive transform block coding". Git at Google. Alliance for Open Media. Retrieved 25 May 2018.
- "AV1 experiment flags". 29 September 2017. Retrieved 2 October 2017.[self-published source]
- "Daala-TX" (PDF). 22 August 2017. Retrieved 26 September 2017.
Replaces the existing AV1 TX with the lifting implementation from Daala. Daala TX is better in every way: ● Fewer multiplies ● Same shifts, quantizers for all transform sizes and depths ● Smaller intermediaries ● Low-bitdepth transforms wide enough for high-bitdepth ● Less hardware area ● Inherently lossless[self-published source]
- Egge, Nathan (27 October 2017). "Daala Transforms in AV1".[self-published source]
- Egge, Nathan (1 December 2017). "Daala Transforms Update".[self-published source]
- Egge, Nathan (15 December 2017). "Daala Transforms Evaluation".[self-published source]
- Egge, Nathan (21 December 2017). "Daala Transforms Informational Discussion".[self-published source]
- "The Future of Video Codecs: VP9, HEVC, AV1". 2 November 2017. Retrieved 30 January 2018.
- Sebastian Grüner (9 June 2016). "Freie Videocodecs teilweise besser als H.265" (in German). golem.de. Retrieved 1 March 2017.
- "Results of Elecard's latest benchmarks of AV1 compared to HEVC". 24 April 2017. Retrieved 14 June 2017.
The most intriguing result obtained after analysis of the data lies in the fact that the developed codec AV1 is currently equal in its performance with HEVC. The given streams are encoded with AV1 update of 2017.01.31
- "Bitmovin Supports AV1 Encoding for VoD and Live and Joins the Alliance for Open Media". 18 April 2017. Retrieved 20 May 2017.[self-published source]
- Ozer, Jan. "HEVC: Rating the contenders" (PDF). Streaming Learning Center. Retrieved 22 May 2017.
- Grois, D.; Nguyen, T.; Marpe, D. (2016). "Coding efficiency comparison of AV1/VP9, H.265/MPEG-HEVC, and H.264/MPEG-AVC encoders". IEEE Picture Coding Symposium (PCS) 2016. http://iphome.hhi.de/marpe/download/Preprint-Performance-Comparison-AV1-HEVC-AVC-PCS2016.pdf
- "Netflix on AV1". Streaming Learning Center. 2017-11-30. Retrieved 2017-12-08.
- "MSU Codec Comparison 2017" (PDF). 2018-01-17. Retrieved 2018-02-09.
- Ozer, Jan (2018-01-30). "AV1 Beats VP9 and HEVC on Quality, if You've Got Time, says Moscow State". Streaming Media Magazine. Retrieved 2018-02-09.
- "AV1 beats x264 and libvpx-vp9 in practical use case". Facebook Code. Retrieved 2018-04-17.
- Shankland, Stephen (2017-11-28). "Firefox now lets you try streaming-video tech that could be better than Apple's". CNET. Retrieved 2017-12-25.
- "DASH playback of AV1 video". Mozilla Hacks. https://hacks.mozilla.org/2017/11/dash-playback-of-av1-video/.[self-published source]
- "VLC 3.0 Vetinari". 2018-02-10. Retrieved 2018-02-10.
- Nick Stat (2018-01-04). "Apple joins group of tech companies working to improve online video compression". The Verge. Retrieved 2018-01-10.
- "DASH playback of AV1 video in Firefox – Mozilla Hacks - the Web developer blog". Mozilla Hacks – the Web developer blog. Retrieved 2018-03-20.
- "AV1 Decode - Chrome Platform Status". www.chromestatus.com. Retrieved 2018-06-28.
- "Can I use... Support tables for HTML5, CSS3, etc". caniuse.com. Retrieved 2018-06-28.
- "VLC release notes".
- "GStreamer 1.14 release notes". gstreamer.freedesktop.org. Retrieved 2018-03-20.
- "Download FFmpeg". www.ffmpeg.org. Retrieved 2018-04-22.
- "FFmpeg". www.ffmpeg.org. Retrieved 2018-04-22.
- "MKVToolNix v22.0.0 release notes".
- "MKVToolNix v22.0.0 released | mosu's Matroska stuff". www.bunkus.org. Retrieved 2018-05-03.
- "MediaInfo 18.03". Neowin. Retrieved 2018-05-03.
- "Encoding Release Notes". Bitmovin Knowledge Base. Retrieved 2018-07-09.
- "AV1 Still Image File Format (AVIF)". aomediacodec.github.io. Retrieved 2018-04-15.
Wikimedia Commons has media related to AOMedia Video.