Video and audio files deployed to constrained platforms such as the web and mobile devices need to be compressed from their original format.
Compression is performed by a codec (COder/DECoder) algorithm. Codecs, as their name implies, are programs that compress and decompress data. (There are audio codecs as well as video codecs.) Compression has two purposes: it reduces file size to speed up transmission, and it reduces the data storage space needed on the destination device.
Most codecs are lossy, which means some of the data is lost during compression and cannot be recovered during decompression. Lossless codecs preserve all original data, and are therefore a 100% faithful reproduction of the original data set when uncompressed. Because the resulting files remain large, lossless compression is not well suited to the web or mobile devices.
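The distinction can be sketched in a few lines of Python (a toy illustration, not a real media codec): `zlib` stands in for a lossless codec, while quantizing away the low bits of each sample stands in for the detail a lossy codec discards.

```python
import zlib

# Toy "audio" stream: a short repeating pattern of sample values.
data = bytes([i % 7 + 100 for i in range(1000)])

# Lossless: decompression restores every byte exactly.
lossless = zlib.compress(data)

# Crude "lossy" step: quantize away the low 3 bits of each sample
# before compressing. The discarded detail can never be recovered.
quantized = bytes(v & 0xF8 for v in data)
lossy = zlib.compress(quantized)
```

Decompressing `lossless` yields `data` byte for byte; decompressing `lossy` can only ever yield the quantized stream, never the original.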
A codec can either be a physical device or a software based process that, on the encoding side, manages the compression of raw digital audio or video data into files of reduced size, optimizing both download and playback performance. On the decoding side, the process is reversed, with the codec uncompressing the file to produce a high quality facsimile of the original content. Because the object of compression is to reduce the overall file size and streaming bandwidth requirements for a given segment of video, it is typically necessary or desirable to “throw away” some of the data during the compression process, meaning that when the codec reproduces content, it will have incrementally lower production values than the original. Compression, which discards some data in the compression and optimization process, is called lossy data compression.
At this point, we depart from the domain of science and cross over into craft and art, because to create an acceptable result, compression algorithms must strike a complex balance between the visual quality of video and the volume of data necessary to render it. For purposes of multimedia content, the key measure of codec performance is the bit rate. In the context of transmitting multimedia data over the Internet or mobile carrier connections, bit rate quantifies the number of bits required per increment of playback time in order for the viewer to see smooth, uninterrupted content. For streaming video, this degree of playback quality is also called goodput. Goodput is the effective transmission rate supporting what the user actually sees on their device – in other words, it is the amount of data transferred after deducting things like Internet, network, and data-link layer protocol overhead; network congestion; and retransmission of data that was corrupted or lost in transit. The ability to empirically measure the performance of various codecs is key, because they have different strengths and, therefore, different applications.
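As a back-of-the-envelope illustration (the function names and the simplified accounting here are ours, not part of any standard), the average bit rate a clip demands, and the goodput a connection actually delivers, can be estimated like this:

```python
def required_bitrate_kbps(file_size_mb, duration_s):
    """Average bit rate (kbit/s) needed to stream a clip smoothly."""
    bits = file_size_mb * 1024 * 1024 * 8
    return bits / duration_s / 1000

def goodput_kbps(bytes_delivered, elapsed_s, overhead_bytes, retransmitted_bytes):
    """Effective throughput (kbit/s) after deducting protocol overhead
    and retransmission of corrupted or lost data."""
    useful = bytes_delivered - overhead_bytes - retransmitted_bytes
    return useful * 8 / elapsed_s / 1000
```

For example, a 15 MB one-minute clip needs roughly 2,097 kbit/s on average; whenever that figure exceeds the connection's goodput, playback stalls.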
Essentially, codecs are optimization tools, and they are many and diverse, often with thriving application genres based on them. The choice of a particular codec is driven by what rendering or transmission characteristics are the focus of optimization; what codecs a developer can reasonably assume to be present on the target platforms; and what post processing tools the developer has available for converting raw data into a video file format. It’s unsurprising that there is a great deal of competition among the developers of codec technology, because achieving a big advance in compression without a loss of quality would have tremendous commercial value. But, on the other hand, if all codec technologies were secret, there would be crippling fragmentation resulting from dozens of incompatible proprietary file formats for encoded video. This problem is neatly solved by an extensive, widely embraced standards-making process for video encoding.
Video codec designs are precisely specified by the Moving Picture Experts Group (MPEG), an international body that includes some 350 members representing media industries, universities, and research institutions. MPEG is chartered by the International Organization for Standardization (ISO) and is tasked with publishing standards documents that detail how various codecs work. What's interesting about this is that MPEG's published specifications assume that the compression of video files is asymmetrical. In this sense, asymmetrical means that it is far more complex and difficult to compress data than to decompress it. As a standards-making group, MPEG is exclusively interested in creating a framework for interoperability among various vendors' codecs and products. This effectively means that only the decoding process needs to be enshrined in a public standard. The encoding process is not constrained by a published MPEG standard. As long as the compressed video files can be decoded as described in the MPEG spec, innovators are encouraged to design new and better encoders, achieving advances in optimization while secure in the knowledge they'll reap the accompanying economic benefits. As encoder technology moves forward, the deployed decoder technology will continue to work, because the decoder side has no knowledge of the encoder implementation and can't be broken by encoder evolution.
Since there is a great deal at stake, the exact strategies of popular encoder designs are usually not public, but the nature of recent general advances is an open secret. Most codecs have transitioned from logic that compresses video data frame by frame to an object-based model, in which the encoder detects regions of frames that don't change rapidly and caches those semi-static portions. This is a tremendous advantage for bandwidth-constrained scenarios like mobile video, because it prevents the transmission of redundant data.
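A drastically simplified sketch of the idea (real encoders work on motion-compensated blocks, not individual pixels, and these function names are illustrative only): transmit just the pixels that changed since the previous frame, and let the decoder patch its cached copy.

```python
def encode_delta(prev_frame, frame):
    """Encoder side: emit (index, value) pairs only for pixels that changed."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev_frame, frame)) if p != v]

def apply_delta(prev_frame, delta):
    """Decoder side: patch the cached previous frame with the changes."""
    frame = list(prev_frame)
    for i, v in delta:
        frame[i] = v
    return frame
```

When most of the frame is static, the delta is far smaller than the frame itself, which is exactly the redundant data the encoder avoids transmitting.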
Both transmission speed and quality of the video rendering produced by decoding the results of various encoders can differ dramatically from one encoder implementation to another. In addition, there can be significant trade-offs in video codecs’ decoder runtime performance and resource utilization. It’s a subtle point, but an important one: Codec standards enable interoperability, but they do not imply uniformity of performance or quality across mobile devices. This potentially complicates life for content designers and developers, because it is necessary to know what codec is going to play your content back in order to ensure that video files provide acceptable playback performance. On desktop and laptop computers, there are frequently a variety of codecs available, and the presence or absence of a single one is rarely an issue for content developers. In any case, a desktop video app can request the user to download a needed codec if it isn’t already present. Not so with mobile devices.
Let’s go over some of the most popular audio and video formats.
1: FLV Format
FLV is the most popular video format on the Internet, with some of the best websites engaging their viewers with Flash-based videos. The Flash Video format has been available since Flash Player 6, and on mobile phones since the Flash Lite 3 player.
An FLV file encodes synchronized audio and video streams. The audio and video data within FLV files are encoded in the same way as audio and video within SWF files. Starting with SWF files published for Flash Player 6, Flash Player can also exchange audio, video, and data over RTMP connections with Adobe Flash Media Server.
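The FLV container itself is easy to inspect: every file begins with a 9-byte header described in Adobe's published specification. A minimal Python sketch of reading it:

```python
import struct

def parse_flv_header(data):
    """Parse the 9-byte FLV file header: 'FLV' signature, version,
    type flags (audio/video present), and header size."""
    sig, version, flags, offset = struct.unpack(">3sBBI", data[:9])
    if sig != b"FLV":
        raise ValueError("not an FLV file")
    return {
        "version": version,
        "has_audio": bool(flags & 0x04),
        "has_video": bool(flags & 0x01),
        "header_size": offset,
    }
```

A typical file with both streams starts with the bytes `FLV\x01\x05` followed by a header size of 9.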
It is estimated that a one-minute video consumes 2–3 MB of RAM, while a five-minute video consumes an average of 3–4 MB. Longer videos play without requiring a linear increase in memory. This is true whether the video is progressive or streaming, local or remote.
2: F4V And F4P Format
F4V is closely associated with FLV, and you will often see the two referred to together as if they were the same format. F4V and F4P are simply Adobe's wrappers for H.264 video. The wrapper exists to overcome limitations of H.264, which does not support features such as an alpha channel or cue points. F4V is available from Flash Player 9.0.r115 and higher. The format maintains the dimensions and frame rate of the source, and it also eliminates black borders. F4P is the protected video format.
FLV and F4V have an open specification.
3: MPEG-4 Format
MPEG-4 Part 14, also known as MP4, is a container that allows you to combine audio and video (as well as other streams) into a single file. MPEG-4 is considered the standard, as many software companies such as Apple and Microsoft support the format. The MPEG-4 video codec and H.264 are the included standards for video coding and compression; H.264 is the evolutionary step, providing improved quality and efficiency. The format is available from Flash Player 9 and above.
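Structurally, an MP4 file is a sequence of "boxes" (also called atoms), each prefixed with a 4-byte big-endian size and a 4-byte type code. A minimal sketch of walking the top-level boxes (it ignores the 64-bit and to-end-of-file size variants the full spec allows):

```python
import struct

def iter_boxes(data):
    """Yield (type, size) for each top-level box in an MP4 byte string."""
    pos = 0
    while pos + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[pos:pos + 8])
        if size < 8:
            break  # 64-bit and to-end-of-file sizes omitted in this sketch
        yield box_type.decode("ascii"), size
        pos += size
```

Running this on a real MP4 typically reveals an `ftyp` box first, followed by `moov` (metadata) and `mdat` (the media samples).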
4: H.264 Format
H.264 is the next-generation video compression technology in the MPEG-4 standard, also known as MPEG-4 Part 10. H.264 delivers excellent video quality across the entire bandwidth spectrum, from 3G mobile connections to high-definition video players. The format is preferred because it produces incredible video quality from the smallest amount of video data: you see crisp, clear video in much smaller files, saving bandwidth and storage costs over previous generations of video codecs. The format is available from Flash Player 9 as well as Flash Lite 3.1.
5: MP3 Format
Part of the MPEG-1 standard, and also known as MPEG-1 Audio Layer 3, MP3 is a patented digital audio encoding format. It is a popular audio format and the standard for lossy data compression of digital audio files.
Flash Player 6.0r40 and later support MP3, and in fact the audio in Flash Video files is usually encoded as MP3. MP3 also supports the ID3 metadata container, which allows data about the music file to be passed along with it.
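The legacy ID3v1 flavor of that container is simple enough to read by hand: it is a fixed 128-byte record at the very end of the file (modern files usually carry the richer ID3v2 tag at the front instead). A minimal sketch:

```python
def parse_id3v1(mp3_bytes):
    """Read the legacy ID3v1 tag from the last 128 bytes of an MP3."""
    tag = mp3_bytes[-128:]
    if len(tag) < 128 or tag[:3] != b"TAG":
        return None  # no ID3v1 tag present

    def field(start, length):
        # Fields are fixed-width, padded with NULs or spaces.
        return tag[start:start + length].rstrip(b"\x00 ").decode("latin-1")

    return {
        "title": field(3, 30),
        "artist": field(33, 30),
        "album": field(63, 30),
        "year": field(93, 4),
    }
```

This is how a player can display track and artist names without decoding any audio at all.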
6: Advanced Audio Coding (AAC)
Supported by Flash Player 9 Update 3 and later, Advanced Audio Coding (AAC) was designed to be the successor of the MP3 format. AAC is a high-efficiency, low-bandwidth audio codec and a standardized, lossy compression and encoding scheme for digital audio. AAC is a higher-quality format than MP3, generally achieving better sound quality at similar bit rates. The format is often packaged in a video container.
7: MOV Format
Available from Flash Player 9 Update 3 and up, the MOV container can be played when it uses MPEG-4 codecs. Because both containers can hold the same codecs, MOV and MP4 are mostly interchangeable in a QuickTime-only environment, but MP4, being an international standard, has broader support. This is especially true on hardware devices, such as the Sony PSP and various DVD players; on the software side, most DirectShow- and QuickTime-based applications can handle either container.
8: 3GP and 3GPP Format
3GP is a simplified version of the MPEG-4 format designed for mobile use. 3GP is based on MPEG-4 and H.263 video, with AAC or AMR audio. The format is designed to optimize video content for mobile devices, and specifically to accommodate low bandwidth and little storage. 3GP is a popular format on mobile devices, and many of them support it. The file extension is either .3gp for GSM-based phones or .3g2 for CDMA-based phones.
9: F4A and F4B Format
From Flash Player 9, Adobe supports the F4A and F4B (audio/mp4) audio formats for Adobe Flash Player. The format is nothing more than an MP4 audio file: F4A denotes an audio file, while F4B denotes an audio book. The reason the format exists is to bridge and avoid compatibility issues between different platforms such as Adobe Flash Player, QuickTime, and the iPod.
10: M4V and M4A
Flash Player 9 Update 3 supports M4V and M4A. While MP4 is the official extension, Apple introduced the M4V and M4A formats, which are the standard file formats for video and audio in the iTunes Store, on the iPod, and on the PlayStation Portable.
The M4V and M4A file formats are identical to MP4 and can simply be renamed to MP4. M4V stands for the video, and M4A for the audio, layer of MP4 movies.
Notice that M4V files can contain DRM and the purchasing user's info. You can use Requiem (http://undrm.info/remove-DRM-protection/Requiem-freeware-Mac-and-PC-DRM-remover-for-iTunes-files.htm) to remove the DRM.
So why did Apple create the format at all? The different file extension allows the file type to be associated with iTunes, so that double-clicking the file opens iTunes, provided it is installed. M4V is often used for movies, TV episodes, and music videos.