
Codecs used by WebRTC

The WebRTC API makes it possible to construct websites and apps that let users communicate in real time, using audio and/or video as well as optional data and other information. To communicate, the two devices need to be able to agree upon a mutually-understood codec for each track so they can successfully communicate and present the shared media. This guide reviews the codecs that browsers are required to implement as well as other codecs that some or all browsers support for WebRTC.

Containerless media

WebRTC uses bare MediaStreamTrack objects for each track being shared from one peer to another, without a container or even a MediaStream associated with the tracks. Which codecs can be within those tracks is not mandated by the WebRTC specification. However, RFC 7742 specifies that all WebRTC-compatible browsers must support VP8 and H.264's Constrained Baseline profile for video, and RFC 7874 specifies that browsers must support at least the Opus codec as well as G.711's PCMA and PCMU formats.

These two RFCs also lay out options that must be supported for each codec, as well as specific user comfort features such as echo cancellation.

While compression is always a necessity when dealing with media on the web, it's especially important for videoconferencing, to ensure that the participants can communicate without lag or interruptions. Of secondary importance is the need to keep the video and audio synchronized, so that the movements and any ancillary information (such as slides or a projection) are presented at the same time as the corresponding audio.

General codec requirements

Before looking at codec-specific capabilities and requirements, there are a few overall requirements that must be met by any codec configuration used with WebRTC.

Unless the SDP specifically signals otherwise, the web browser receiving a WebRTC video stream must be able to handle video at 20 FPS at a minimum resolution of 320 pixels wide by 240 pixels tall. It's encouraged that video be encoded at a frame rate and size no lower than that, since that's essentially the lower bound of what WebRTC generally is expected to handle.

SDP supports a codec-independent way to specify preferred video resolutions (RFC 6236). This is done by sending an a=imageattr SDP attribute to indicate the maximum resolution that is acceptable. The sender is not required to support this mechanism, however, so you have to be prepared to receive media at a different resolution than you requested. Beyond this simple maximum resolution request, specific codecs may offer further ways to ask for specific media configurations.

Supported video codecs

WebRTC establishes a baseline set of codecs which all compliant browsers are required to support. Some browsers may choose to allow other codecs as well.

Below are the video codecs which are required in any fully WebRTC-compliant browser, as well as the profiles which are required and the browsers which actually meet the requirement.

For details on WebRTC-related considerations for each codec, see the sub-sections below by following the links on each codec's name.

Complete details of what video codecs and configurations WebRTC is required to support can be found in RFC 7742: WebRTC Video Processing and Codec Requirements . It's worth noting that the RFC covers a variety of video-related requirements, including color spaces (sRGB is the preferred, but not required, default color space), recommendations for webcam processing features (automatic focus, automatic white balance, automatic light level), and so on.

Note: These requirements are for web browsers and other fully WebRTC-compliant products. Non-WebRTC products that are able to communicate with WebRTC to some extent may or may not support these codecs, although they're encouraged to by the specification documents.

In addition to the mandatory codecs, some browsers support additional codecs as well. Those are listed in the following table.

VP8

VP8, which we describe in general in the main guide to video codecs used on the web, has some specific requirements that must be followed when using it to encode or decode a video track on a WebRTC connection.

Unless signaled otherwise, VP8 will use square pixels (that is, pixels with an aspect ratio of 1:1).

Other notes

The network payload format for sharing VP8 using RTP (such as when using WebRTC) is described in RFC 7741: RTP Payload Format for VP8 Video .

AVC / H.264

Support for AVC's Constrained Baseline (CB) profile is required in all fully-compliant WebRTC implementations. CB is a subset of the main profile, and is specifically designed for low-complexity, low-delay applications such as mobile video and videoconferencing, as well as for platforms with lower performing video processing capabilities.

Our overview of AVC and its features can be found in the main video codec guide.

Special parameter support requirements

AVC offers a wide array of parameters for controlling optional values. In order to improve reliability of WebRTC media sharing across multiple platforms and browsers, it's required that WebRTC endpoints that support AVC handle certain parameters in specific ways. Sometimes this means a parameter must (or must not) be supported. Sometimes it means requiring a specific value for a parameter, or that a specific set of values be allowed. And sometimes the requirements are more intricate.

Parameters which are useful but not required

These parameters don't have to be supported by the WebRTC endpoint, and their use is optional. They can improve the user experience in various ways, but some of them are fairly complicated to use.

If specified and supported by the software, the max-br parameter specifies the maximum video bit rate in units of 1,000 bps for VCL and 1,200 bps for NAL. You'll find details about this on page 47 of RFC 6184 .

If specified and supported by the software, max-cpb specifies the maximum coded picture buffer size. This is a fairly complicated parameter whose unit size can vary. See page 45 of RFC 6184 for details.

If specified and supported, max-dpb indicates the maximum decoded picture buffer size, given in units of 8/3 macroblocks. See RFC 6184, page 46 for further details.

If specified and supported by the software, max-fs specifies the maximum size of a single video frame, given as a number of macroblocks.

If specified and supported by the software, the max-mbps parameter is an integer specifying the maximum rate at which macroblocks should be processed per second (in macroblocks per second).

If specified and supported by the software, the max-smbps parameter specifies an integer stating the maximum static macroblock processing rate, in static macroblocks per second (computed under the hypothetical assumption that all macroblocks are static macroblocks).

Parameters with specific requirements

These parameters may or may not be required, but have some special requirement when used.

All endpoints are required to support packetization-mode 1 (non-interleaved mode). Support for other packetization modes is optional, and the packetization-mode parameter itself is not required to be specified.

Sequence and picture information for AVC can be sent either in-band or out-of-band. When AVC is used with WebRTC, this information must be signaled in-band; the sprop-parameter-sets parameter must therefore not be included in the SDP.

Parameters which must be specified

These parameters must be specified whenever using AVC in a WebRTC connection.

All WebRTC implementations are required to specify and interpret the profile-level-id parameter in their SDP, identifying the sub-profile used by the codec. The specific value that is set is not defined; what matters is that the parameter be used at all. This is useful to note, since in RFC 6184 ("RTP Payload Format for H.264 Video"), profile-level-id is entirely optional.
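For illustration, a typical H.264 fmtp line in a WebRTC offer looks something like the following; the payload type number and exact values vary by browser, and this example is not drawn from the RFCs above:

```
a=fmtp:102 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f
```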

Other requirements

For the purposes of supporting switching between portrait and landscape orientations, two methods can be used. The first is the Coordination of Video Orientation (CVO) header extension to the RTP protocol. If CVO isn't signaled as supported in the SDP, browsers are encouraged, but not required, to support Display Orientation SEI messages instead.

Unless signaled otherwise, the pixel aspect ratio is 1:1, indicating that pixels are square.

The payload format used for AVC in WebRTC is described in RFC 6184: RTP Payload Format for H.264 Video . AVC implementations for WebRTC are required to support the special "filler payload" and "full frame freeze" SEI messages; these are used to support switching among multiple input streams seamlessly.

Supported audio codecs

The audio codecs which RFC 7874 mandates that all WebRTC-compatible browsers must support are shown in the table below.

See below for more details about any WebRTC-specific considerations that exist for each codec listed above.

It's useful to note that RFC 7874 defines more than a list of audio codecs that a WebRTC-compliant browser must support; it also provides recommendations and requirements for special audio features such as echo cancellation, noise reduction, and audio leveling.

Note: The list above indicates the minimum required set of codecs that all WebRTC-compatible endpoints are required to implement. A given browser may also support other codecs; however, cross-platform and cross-device compatibility may be at risk if you use other codecs without carefully ensuring that support exists in all browsers your users might choose.

In addition to the mandatory audio codecs, some browsers support additional codecs as well. Those are listed in the following table.

Internet Low Bitrate Codec ( iLBC ) is an open-source narrow-band codec developed by Global IP Solutions and now Google, designed specifically for streaming voice audio. Google and some other browser developers have adopted it for WebRTC.

The Internet Speech Audio Codec ( iSAC ) is another codec developed by Global IP Solutions and now owned by Google, which has open-sourced it. It's used by Google Talk, QQ, and other instant messaging clients and is specifically designed for voice transmissions which are encapsulated within an RTP stream.

Comfort noise (CN) is a form of artificial background noise which is used to fill gaps in a transmission instead of using pure silence. This helps to avoid a jarring effect that can occur when voice activation and similar features cause a stream to stop sending data temporarily—a capability known as Discontinuous Transmission (DTX). RFC 3389 describes a method for providing an appropriate filler to use during silences.

Comfort Noise is used with G.711, and may potentially be used with other codecs that don't have a built-in CN feature. Opus, for example, has its own CN capability; as such, using RFC 3389 CN with the Opus codec is not recommended.

An audio sender is never required to use discontinuous transmission or comfort noise.

The Opus format, defined by RFC 6716, is the primary format for audio in WebRTC. The RTP payload format for Opus is found in RFC 7587. You can find more general information about Opus and its capabilities, and how other APIs can support Opus, in the corresponding section of our guide to audio codecs used on the web.

Both the speech and general audio modes should be supported. Opus's scalability and flexibility are useful when dealing with audio that may have varying degrees of complexity. Its support of in-band stereo signals allows support for stereo without complicating the demultiplexing process.

The entire range of bit rates supported by Opus (6 kbps to 510 kbps) is supported in WebRTC, with the bit rate allowed to be dynamically changed. Higher bit rates typically improve quality.

Bit rate recommendations

Given a 20 millisecond frame size, the following table shows the recommended bit rates for various forms of media.

The bit rate may be adjusted at any time. In order to avoid network congestion, the average audio bit rate should not exceed the available network bandwidth (minus any other known or anticipated added bandwidth requirements).

G.711 defines the format for Pulse Code Modulation ( PCM ) audio as a series of 8-bit integer samples taken at a sample rate of 8,000 Hz, yielding a bit rate of 64 kbps. Both µ-law and A-law encodings are allowed.

G.711 is defined by the ITU and its payload format is defined in RFC 3551, section 4.5.14 .

WebRTC requires that G.711 use 8-bit samples at the standard 64 kbps rate, even though G.711 supports some other variations. Neither G.711.0 (lossless compression), G.711.1 (wideband capability), nor any other extensions to the G.711 standard are mandated by WebRTC.

Due to its low sample rate and sample size, G.711 audio quality is generally considered poor by modern standards, even though it's roughly equivalent to what a landline telephone sounds like. It is generally used as a least common denominator to ensure that browsers can achieve an audio connection regardless of platforms and browsers, or as a fallback option in general.

Specifying and configuring codecs

Getting the supported codecs

Because a given browser and platform may have different availability among the potential codecs—and may have multiple profiles or levels supported for a given codec—the first step when configuring codecs for an RTCPeerConnection is to get the list of available codecs. To do this, you first have to establish a connection on which to get the list.

There are a couple of ways you can do this. The most efficient way is to use the static method RTCRtpSender.getCapabilities() (or the equivalent RTCRtpReceiver.getCapabilities() for a receiver), specifying the type of media as the input parameter. For example, to determine the supported codecs for video, you can do this:
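A minimal sketch of that call (getCapabilities() can return null for an unsupported media kind, so the result is guarded):

```js
const capabilities = RTCRtpSender.getCapabilities("video");
const codecList = capabilities ? capabilities.codecs : null;
```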

Now codecList is an array of codec objects, each describing one codec configuration. Also present in the list will be entries for retransmission (RTX), redundant coding (RED), and forward error correction (FEC).

If the connection is in the process of starting up, you can use the icegatheringstatechange event to watch for the completion of ICE candidate gathering, then fetch the list.
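A sketch of that approach, assuming peerConnection is the RTCPeerConnection being set up (the wrapper function and its name are illustrative):

```js
function getConnectionVideoCodecs(peerConnection) {
  return new Promise((resolve) => {
    peerConnection.addEventListener("icegatheringstatechange", () => {
      if (peerConnection.iceGatheringState !== "complete") {
        return;
      }

      // Find the first sender whose track is carrying video.
      const videoSender = peerConnection
        .getSenders()
        .find((sender) => sender.track && sender.track.kind === "video");

      // Resolve with that sender's codecs, or null if no video track exists.
      resolve(videoSender ? videoSender.getParameters().codecs : null);
    });
  });
}
```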

The event handler for icegatheringstatechange is established; in it, we look to see if the ICE gathering state is complete , indicating that no further candidates will be collected. The method RTCPeerConnection.getSenders() is called to get a list of all the RTCRtpSender objects used by the connection.

With that in hand, we walk through the list of senders, looking for the first one whose MediaStreamTrack indicates that its kind is video, meaning the track's data is video media. We then call that sender's getParameters() method, set codecList to the codecs property in the returned object, and return it to the caller.

If no video track is found, we set codecList to null .

On return, then, codecList is either null to indicate that no video tracks were found or it's an array of RTCRtpCodecParameters objects, each describing one permitted codec configuration. Of special importance in these objects: the payloadType property, which is a one-byte value which uniquely identifies the described configuration.

Note: The two methods for obtaining lists of codecs shown here use different output types in their codec lists. Be aware of this when using the results.

Customizing the codec list

Once you have a list of the available codecs, you can alter it and then send the revised list to RTCRtpTransceiver.setCodecPreferences() to rearrange the codec list. This changes the order of preference of the codecs, letting you tell WebRTC to prefer a different codec over all others.
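A sketch consistent with the walkthrough below, assuming peerConnection is the active RTCPeerConnection and preferCodec() is the helper shown later in this section:

```js
function changeVideoCodec(mimeType) {
  const transceivers = peerConnection.getTransceivers();

  transceivers.forEach((transceiver) => {
    // The kind of media this transceiver carries ("audio" or "video").
    const kind = transceiver.sender.track?.kind;

    // All codecs the browser can send and receive for this kind of media.
    let sendCodecs = RTCRtpSender.getCapabilities(kind).codecs;
    let recvCodecs = RTCRtpReceiver.getCapabilities(kind).codecs;

    if (kind === "video") {
      sendCodecs = preferCodec(sendCodecs, mimeType);
      recvCodecs = preferCodec(recvCodecs, mimeType);
      transceiver.setCodecPreferences([...sendCodecs, ...recvCodecs]);
    }
  });

  // Trigger renegotiation so the new preferences take effect.
  peerConnection.onnegotiationneeded();
}
```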

In this sample, the function changeVideoCodec() takes as input the MIME type of the codec you wish to use. The code starts by getting a list of all of the RTCPeerConnection 's transceivers.

Then, for each transceiver, we get the kind of media represented by the transceiver from the RTCRtpSender 's track's kind . We also get the lists of all codecs supported by the browser for both sending and receiving video, using the getCapabilities() static method of both RTCRtpSender and RTCRtpReceiver .

If the media is video, we call a method called preferCodec() for both the sender's and receiver's codec lists; this method rearranges the codec list the way we want (see below).

Finally, we call the RTCRtpTransceiver 's setCodecPreferences() method to specify that the given send and receive codecs are allowed, in the newly rearranged order.

That's done for each transceiver on the RTCPeerConnection ; once all of the transceivers have been updated, we call the onnegotiationneeded event handler, which will create a new offer, update the local description, send the offer along to the remote peer, and so on, thereby triggering the renegotiation of the connection.

The preferCodec() function called by the code above looks like this to move a specified codec to the top of the list (to be prioritized during negotiation):
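A minimal version of that helper, mirroring the description that follows:

```js
function preferCodec(codecs, mimeType) {
  const matching = [];
  const others = [];

  codecs.forEach((codec) => {
    if (codec.mimeType === mimeType) {
      matching.push(codec);
    } else {
      others.push(codec);
    }
  });

  // Matching codecs first, then everything else; relative order is preserved.
  return matching.concat(others);
}
```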

This code is just splitting the codec list into two arrays: one containing codecs whose MIME type matches the one specified by the mimeType parameter, and the other with all the other codecs. Once the list has been split up, they're concatenated back together with the entries matching the given mimeType first, followed by all of the other codecs. The rearranged list is then returned to the caller.

Default codecs

Unless otherwise specified, the default—or, more accurately, preferred—codecs requested by each browser's implementation of WebRTC are shown in the table below.

Choosing the right codec

Before choosing a codec that isn't one of the mandatory codecs (VP8 or AVC for video and Opus or PCM for audio), you should seriously consider the potential drawbacks: in particular, only these codecs can be generally assumed to be available on essentially all devices that support WebRTC.

If you choose to prefer a codec other than the mandatory ones, you should at least allow for fallback to one of the mandatory codecs if support is unavailable for the codec you prefer.

In general, if it's available and the audio you wish to send has a sample rate greater than 8 kHz, you should strongly consider using Opus as your primary codec. For voice-only connections in a constrained environment, using G.711 at an 8 kHz sample rate can provide an acceptable experience for conversation, but typically you'll use G.711 as a fallback option, since there are other options which are more efficient and sound better, such as Opus in its narrowband mode.

There are a number of factors that come into play when deciding upon a video codec (or set of codecs) to support.

Licensing terms

Before choosing a video codec, make sure you're aware of any licensing requirements around the codec you select; you can find information about possible licensing concerns in our main guide to video codecs used on the web. Of the two mandatory codecs for video—VP8 and AVC/H.264—only VP8 is completely free of licensing requirements. If you select AVC, make sure you're aware of any potential fees you may need to pay; that said, the patent holders have generally said that most typical website developers shouldn't need to worry about paying the license fees, which are typically focused more on the developers of the encoding and decoding software.

Warning: The information here does not constitute legal advice! Be sure to confirm your exposure to liability before making any final decisions where potential exists for licensing issues.

Power needs and battery life

Another factor to consider—especially on mobile platforms—is the impact a codec may have on battery life. If a codec is handled in hardware on a given platform, that codec is likely to allow for much better battery life and less heat production.

For example, Safari for iOS and iPadOS introduced WebRTC with AVC as the only supported video codec. AVC has the advantage, on iOS and iPadOS, of being able to be encoded and decoded in hardware. Safari 12.1 introduced support for VP8 within WebRTC, which improves interoperability, but at a cost—VP8 has no hardware support on iOS devices, so using it causes increased processor impact and reduced battery life.

Performance

Fortunately, VP8 and AVC perform similarly from an end-user perspective, and are equally adequate for use in videoconferencing and other WebRTC solutions. The final decision is yours. Whichever you choose, be sure to read the information provided in this article about any particular configuration issues you may need to contend with for that codec.

Keep in mind that choosing a codec that isn't on the list of mandatory codecs likely runs the risk of selecting a codec which isn't supported by a browser your users might prefer. See the article Handling media support issues in web content to learn more about how to offer support for your preferred codecs while still being able to fall back on browsers that don't implement that codec.

Security implications

There are interesting potential security issues that come up while selecting and configuring codecs. WebRTC video is protected using Datagram Transport Layer Security ( DTLS ), but it is theoretically possible for a motivated party to infer the amount of change that's occurring from frame to frame when using variable bit rate (VBR) codecs, by monitoring the stream's bit rate and how it changes over time. This could potentially allow a bad actor to infer something about the content of the stream, given the ebb and flow of the bit rate.

For more about security considerations when using AVC in WebRTC, see RFC 6184, section 9: RTP Payload Format for H.264 Video: Security Considerations .

RTP payload format media types

It may be useful to refer to the IANA 's list of RTP payload format media types; this is a complete list of the MIME media types defined for potential use in RTP streams, such as those used in WebRTC. Most of these are not used in WebRTC contexts, but the list may still be useful.

See also RFC 4855 , which covers the registry of media types.

  • Introduction to WebRTC protocols
  • WebRTC connectivity
  • Guide to video codecs used on the web
  • Guide to audio codecs used on the web
  • Digital video concepts
  • Digital audio concepts

On the Road to WebRTC 1.0, Including VP8

Mar 12, 2019

by Youenn Fablet

Safari 11 was the first Safari version to support WebRTC. Since then, we have worked to continue improving WebKit’s implementation and compliance with the spec. I am excited to announce major improvements to WebRTC in Safari 12.1 on iOS 12.2 and macOS 10.14.4 betas, including VP8 video codec support, video simulcast support and Unified Plan SDP (Session Description Protocol) experimental support.

VP8 Video Codec

The VP8 video codec is widely used in existing WebRTC solutions. It is now supported as a WebRTC-only video codec in Safari 12.1 on both iOS and macOS betas. By supporting both VP8 and H.264, Safari 12.1 can exchange video with any other WebRTC endpoint. H.264 is the default codec for Safari because it is backed by hardware acceleration and tuned for real-time communication. This provides a great user experience and power efficiency. We found that, on an iPhone 7 Plus in laboratory conditions, the use of H.264 on a 720p video call increases the battery life by up to an hour compared to VP8. With H.264, VP8 and Unified Plan, Safari can mix H.264 and VP8 on a single connection. It is as simple as doing the following:
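In outline, mixing the two codecs on one connection looks something like this (a sketch rather than the post's exact snippet; signaling and media handling are omitted):

```js
const connection = new RTCPeerConnection();
connection.addTransceiver("video");
connection.addTransceiver("video");

const codecs = RTCRtpSender.getCapabilities("video").codecs;
const h264Codecs = codecs.filter((codec) => codec.mimeType === "video/H264");
const vp8Codecs = codecs.filter((codec) => codec.mimeType === "video/VP8");

// The first video transceiver prefers H.264, the second VP8.
const [first, second] = connection.getTransceivers();
first.setCodecPreferences(h264Codecs);
second.setCodecPreferences(vp8Codecs);
```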

Video Simulcast

To further improve WebRTC support for multi-party video conferencing, simulcast is now supported for both H.264 and VP8. Kudos to the libwebrtc community, including Cosmo Software , for making great progress in that important area. Simulcast is a technique that encodes the same video content with different encoding parameters, typically different frame sizes and bit rates. This is particularly useful when the same content is sent to several clients through a central server, called an SFU . As the clients might have different constraints (screen size, network conditions), the SFU is able to send the most suitable stream to each client. Each individual encoding can be controlled using RTCRtpSender.setParameters on the sender side. Simulcast is currently activated using SDP munging . Future work should allow simulcast activation using RTCPeerConnection.addTransceiver , as per specification.

Unified Plan

WebRTC uses SDP as the format for negotiating the configuration of a connection. While previous versions of Safari used Plan B SDP only, Safari is now transitioning to the standardized version known as Unified Plan SDP. Unified Plan SDP can express WebRTC configurations in a much more flexible way, as each audio or video stream transmission can be configured independently.

If your website uses at most one audio and one video track per connection, this transition should not require any major changes. If your website uses connections with more audio or video tracks, adoption may be required. With Unified Plan SDP enabled, the support of the WebRTC 1.0 API, in particular the transceiver API, is more complete and spec-compliant than ever. To detect whether Safari uses Unified Plan, you can use feature detection:
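One commonly used check, not necessarily the exact snippet from the original post, tests for the transceiver API, which is only exposed by Unified Plan implementations:

```js
// True when the browser exposes the Unified Plan transceiver API.
const usesUnifiedPlan =
  typeof window.RTCRtpTransceiver !== "undefined" &&
  "currentDirection" in window.RTCRtpTransceiver.prototype;
```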

Unified Plan is an experimental feature that is currently turned on by default in Safari Technology Preview and turned off by default in Safari in iOS 12.2 and macOS 10.14.4 betas. This can be turned on using the Develop menu on macOS and Safari settings on iOS.

Additional API updates

Safari also comes with additional improvements, including better support of capture device selection, experimental support of the screen capture API , and deprecation of the WebRTC legacy API.

Web applications sometimes want to select the same capture devices used on a previous call. Device IDs will remain stable across browsing sessions as soon as navigator.mediaDevices.getUserMedia is successfully called once. Your web page can implement persistent device selection as follows:
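A sketch of that pattern (the localStorage key name is illustrative):

```js
async function getPreferredCamera() {
  const savedId = localStorage.getItem("preferredVideoDeviceId");

  // Ask for the previously used camera first; the deviceId is only a hint.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: savedId ? { deviceId: savedId } : true,
  });

  // Remember whichever device was actually granted for next time.
  const [track] = stream.getVideoTracks();
  localStorage.setItem("preferredVideoDeviceId", track.getSettings().deviceId);
  return stream;
}
```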

Existing fingerprinting mitigations remain in place, including the filtering of information provided by navigator.mediaDevices.enumerateDevices as detailed in the blog post, “A Closer Look Into WebRTC” .

Initial support of the screen capture API is now available as an experimental feature in Safari Technology Preview. By calling navigator.mediaDevices.getDisplayMedia , a web application can capture the main screen on macOS.
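A minimal sketch:

```js
async function startScreenShare() {
  // Prompts the user, then resolves with a MediaStream of the captured display.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  document.querySelector("video").srcObject = stream;
}
```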

Following the strategy detailed in “A Closer Look Into WebRTC” , the WebRTC legacy API was disabled by default in Safari 12.0. Support for the WebRTC legacy API is removed from iOS 12.2 and macOS 10.14.4 betas. Should your application need support of the WebRTC legacy API, we recommend the use of the open source adapter.js library as a polyfill.

We always appreciate your feedback. Send us bug reports , or feel free to tweet @webkit on Twitter, or email [email protected] .


Apple adds WebM Web Audio support to Safari in latest iOS 15 beta


Currently available as an option in the Experimental WebKit Features section of Safari's advanced settings, WebM Web Audio and the related WebM MSE parser are two parts of the wider WebM audiovisual media file format developed by Google.

An open-source initiative, WebM presents a royalty-free alternative to common web video streaming technology and serves as a container for the VP8 and VP9 video codecs. As it relates to Safari, WebM Web Audio provides support for the Vorbis and Opus audio codecs.

Code uncovered by 9to5Mac reveals the WebM audio codec should be enabled by default going forward, suggesting that Apple will officially adopt the standard when iOS 15 sees release.

Apple added support for the WebM video codec on Mac when a second macOS Big Sur 11.3 beta was issued in February . The video portion of WebM has yet to see implementation on iOS, but that could soon change with the adoption of WebM's audio assets.

WebM dates back to 2010, but Apple has been reluctant to bake the format into its flagship operating systems. Late co-founder Steve Jobs once called the format "a mess" that "wasn't ready for prime time."

As AppleInsider noted when WebM hit macOS, Apple might be angling to support high-resolution playback from certain streaming services like YouTube, which rely on VP9 to stream 4K content. The validation of WebM Web Audio is a step in that direction.

Apple is expected to launch iOS 15 this fall alongside a slate of new iPhone and Apple Watch models.


WebM video format

Multimedia format designed to provide a royalty-free, high-quality open video compression format for use with HTML5 video. WebM supports the VP8 and VP9 video codecs.

WebP image format

Image format (based on the VP8 video format) that supports lossy and lossless compression, as well as animation and alpha transparency. WebP generally has better compression than JPEG, PNG and GIF and is designed to supersede them. [AVIF](/avif) and [JPEG XL](/jpegxl) are designed to supersede WebP.


Guide · apple, code, getUserMedia, ios, Safari · Chad Phillips · September 7, 2018

Guide to WebRTC with Safari in the Wild (Chad Phillips)

It has been more than a year since Apple first added WebRTC support to Safari. My original post reviewing the implementation continues to be popular here, but it does not reflect some of the updates since the first limited release. More importantly, given its differences and limitations, many questions still remained on how to best develop WebRTC applications for Safari.

I ran into Chad Phillips at Cluecon  (again) this year and we ended up talking about his arduous experience making WebRTC work on Safari. He had a great, recent list of tips and tricks so I asked him to share it here.

Chad is a long-time open source guy and contributor to the FreeSWITCH product. He has been involved with WebRTC development since 2015. He recently launched  MoxieMeet , a videoconferencing platform for online experiential events, where he is CTO and developed a lot of the insights for this post.

{"editor": "chad hart"}


In June of 2017, Apple became the last major vendor to release support for WebRTC, paving the (still bumpy) road for platform interoperability.

And yet, more than a year later, I continue to be surprised by the lack of guidance available for developers to integrate their WebRTC apps with Safari/iOS. Outside of a couple posts by the Webkit team, some scattered StackOverflow questions, the knowledge to be gleaned from scouring the Webkit bug reports for WebRTC, and a few posts on this very website , I really haven’t seen much support available. This post is an attempt to begin rectifying the gap.

I have spent many months of hard work integrating WebRTC in Safari for a very complex videoconferencing application. Most of my time was spent getting iOS working, although some of the below pointers also apply to Safari on MacOS.

This post assumes you have some level of experience with implementing WebRTC — it’s not meant to be a beginner’s how to, but a guide for experienced developers to smooth the process of integrating their apps with Safari/iOS. Where appropriate I’ll point to related issues filed in the Webkit bug tracker so that you may add your voice to those discussions, as well as some other informative posts.

I did an awful lot of bushwhacking in order to claim iOS support in my app; hopefully the knowledge below will make for a smoother journey for you!

Some good news first

First, the good news:

  • Apple’s current implementation is fairly solid
  • For something simple like a 1-1 audio/video call, the integration is quite easy

Let’s have a look at some requirements and trouble areas.

General Guidelines and Annoyances

Use the current WebRTC spec


If you’re building your application from scratch, I recommend using the current WebRTC API spec (it’s undergone several iterations). The following resources are great in this regard:

  • https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API
  • https://github.com/webrtc/samples

For those of you running apps with older WebRTC implementations, I’d recommend you upgrade to the latest spec if you can, as the next release of iOS  disables the legacy APIs by default. In particular, it’s best to avoid the legacy addStream APIs, which make it more difficult to manipulate tracks in a stream.

More background on this here: https://blog.mozilla.org/webrtc/the-evolution-of-webrtc/

iPhone and iPad have unique rules – test both


Since the iPhone and iPad have different rules and limitations, particularly around video, I’d strongly recommend that you test your app on both devices. It’s probably smarter to start by getting it working fully on the iPhone, which seems to have more limitations than the iPad.

More background on this here: https://webkit.org/blog/6784/new-video-policies-for-ios

Let the iOS madness begin

It’s possible that may be all you need to get your app working on iOS. If not, now comes the bad news: the iOS implementation has some rather maddening bugs/restrictions, especially in more complex scenarios like multiparty conference calls.

Other browsers on iOS missing WebRTC integration


The WebRTC APIs have not yet been exposed to iOS browsers using WKWebView. In practice, this means that your web-based WebRTC application will only work in Safari on iOS, and not in any other browser the user may have installed (Chrome, for example), nor in an ‘in-app’ version of Safari.

To avoid user confusion, you’ll probably want to include some helpful user error message if they try to open your app in another browser/environment besides Safari proper.

Related issues:

  • https://bugs.webkit.org/show_bug.cgi?id=183201
  • https://bugs.chromium.org/p/chromium/issues/detail?id=752458

No beforeunload event, use pagehide

According to this Safari event documentation, the unload event has been deprecated, and the beforeunload event has been completely removed in Safari. So if you're using these events, for example, to handle call cleanup, you'll want to refactor your code to use the pagehide event on Safari instead.
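For example (assuming peerConnection holds the call's active connection):

```js
// beforeunload is gone in Safari; pagehide fires reliably instead.
window.addEventListener("pagehide", () => {
  peerConnection.close();
});
```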

source:  https://gist.github.com/thehunmonkgroup/6bee8941a49b86be31a787fe8f4b8cfe

Getting & playing media: the playsinline attribute

Step one is to add the required playsinline attribute to your video tags, which allows the video to start playing inline on iOS. It's a one-attribute change, illustrated below.
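For example (the autoplay attribute here is incidental; playsinline is the Safari-specific addition):

```html
<!-- Before: will not start playback inline on iOS Safari -->
<video autoplay></video>

<!-- After: playsinline permits inline playback -->
<video autoplay playsinline></video>
```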

playsinline was originally only a requirement for Safari on iOS, but now you might need to use it in some cases in Chrome too – see Dag-Inge's post for more on that.

See the thread here for details on this requirement: https://github.com/webrtc/samples/issues/929

Autoplay rules

Next you’ll need to be aware of the Webkit WebRTC rules on autoplaying audio/video. The main rules are:

  • MediaStream-backed media will autoplay if the web page is already capturing.
  • MediaStream-backed media will autoplay if the web page is already playing audio
  • A user gesture is required to initiate any audio playback – WebRTC or otherwise.

This is good news for the common use case of a video call, since you’ve most likely already gotten permission from the user to use their microphone/camera, which satisfies the first rule. Note that these rules work alongside the base autoplay rules for MacOS and iOS, so it’s good to be aware of them as well.

Related webkit posts:

  • https://webkit.org/blog/7763/a-closer-look-into-webrtc
  • https://webkit.org/blog/7734/auto-play-policy-changes-for-macos
  • https://webkit.org/blog/6784/new-video-policies-for-ios

No low/limited video resolutions


UPDATE 2019-08-18:

Unfortunately this bug has only gotten worse in  iOS 12, as their attempt to fix it broke the sending of video to peer connections for non-standard resolutions. On the positive side the issue does seem to be fully fixed in the latest iOS 13 Beta: https://bugs.webkit.org/show_bug.cgi?id=195868

Visiting https://jsfiddle.net/thehunmonkgroup/kmgebrfz/15/ (or the webrtcHacks WebRTC-Camera-Resolution project) in a WebRTC-compatible browser will give you a quick analysis of common resolutions that are supported by the tested device/browser combination. You'll notice that in Safari on both MacOS and iOS, there aren't any available low video resolutions such as the industry standard QQVGA, or 160×120 pixels. These small resolutions are pretty useful for serving thumbnail-sized videos — think of the filmstrip of users in a Google Hangouts call, for example.

Now you could just send whatever the lowest available native resolution is along the peer connection and let the receiver’s browser downscale the video, but you’ll run the risk of saturating the download bandwidth for users that have less speedy internet in mesh/SFU scenarios.

I’ve worked around this issue by restricting the bitrate of the sent video, which is a fairly quick and dirty compromise. Another solution that would take a bit more work is to handle downscaling the video stream in your app before passing it to the peer connection, although that will result in the client’s device spending some CPU cycles.

Example code:

  • https://webrtc.github.io/samples/src/content/peerconnection/bandwidth/

New getUserMedia() request kills existing stream track


If your application grabs media streams from multiple getUserMedia() requests, you are likely in for problems with iOS. From my testing, the issue can be summarized as follows: if getUserMedia() requests a media type requested in a previous getUserMedia(), the previously requested media track's muted property is set to true, and there is no way to programmatically unmute it. Data will still be sent along a peer connection, but it's not of much use to the other party with the track muted! This limitation is currently expected behavior on iOS.

I was able to successfully work around it by:

  • Grabbing a global audio/video stream early on in my application’s lifecycle
  • Using MediaStream.clone(), MediaStream.addTrack(), and MediaStream.removeTrack() to create/manipulate additional streams from the global stream without calling getUserMedia() again (see the sketch after this list).
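A sketch of that pattern (simplified; error handling omitted):

```js
async function setUpMedia() {
  // Grab one global audio/video stream early in the app's lifecycle.
  const globalStream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });

  // Later, derive additional streams without calling getUserMedia() again.
  const videoOnlyStream = new MediaStream(globalStream.getVideoTracks());
  const clonedStream = globalStream.clone();

  return { globalStream, videoOnlyStream, clonedStream };
}
```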

source:  https://gist.github.com/thehunmonkgroup/2c3be48a751f6b306f473d14eaa796a0

See this post for more: https://developer.mozilla.org/en-US/docs/Web/API/MediaStream  and

this related issue: https://bugs.webkit.org/show_bug.cgi?id=179363

Managing Media Devices

Media device IDs change on page reload

This has been improved as of iOS 12.2, where device IDs are now stable across browsing sessions after getUserMedia() has been called once. However, device IDs are still not preserved across browser sessions, so this improvement isn't really helpful for storing a user's device preferences longer term. For more info, see https://webkit.org/blog/8672/on-the-road-to-webrtc-1-0-including-vp8/

Many applications include support for user selection of audio/video devices. This eventually boils down to passing the deviceId to getUserMedia() as a constraint.

Unfortunately for you as a developer, as part of Webkit's security protocols, random deviceIds are generated for all devices on each new page load. This means, unlike every other platform, you can't simply stuff the user's selected deviceId into persistent storage for future reuse.

The cleanest workaround I’ve found for this issue is:

  • Store both device.deviceId and device.label for the device the user selects
  • Try using the saved deviceId
  • If that fails, enumerate the devices again, and try looking up the deviceId from the saved device label (see the sketch after this list).
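A sketch of that lookup (the shape of the saved object is illustrative):

```js
async function getSavedCamera(saved) {
  // saved = { deviceId, label }, persisted from an earlier session.
  try {
    return await navigator.mediaDevices.getUserMedia({
      video: { deviceId: { exact: saved.deviceId } },
    });
  } catch (err) {
    // The stored ID was probably regenerated; fall back to the label.
    // Note: labels are only populated after a successful getUserMedia()
    // call (see the fingerprinting note below).
    const devices = await navigator.mediaDevices.enumerateDevices();
    const match = devices.find((d) => d.label === saved.label);
    return navigator.mediaDevices.getUserMedia({
      video: match ? { deviceId: { exact: match.deviceId } } : true,
    });
  }
}
```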

On a related note: Webkit further prevents fingerprinting by only exposing a user's actual available devices after the user has granted device access. In practice, this means you need to make a getUserMedia() call before you call enumerateDevices().

source:  https://gist.github.com/thehunmonkgroup/197983bc111677c496bbcc502daeec56

Related issue: https://bugs.webkit.org/show_bug.cgi?id=179220

Related post: https://webkit.org/blog/7763/a-closer-look-into-webrtc

Speaker selection not supported

Webkit does not yet support HTMLMediaElement.setSinkId(), which is the API method used for assigning audio output to a specific device. If your application includes support for this, you'll need to make sure it can handle cases where the underlying API support is missing.
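A simple feature check along these lines (audioElement and selectedOutputDeviceId are illustrative names):

```js
const canChooseSpeaker = "setSinkId" in HTMLMediaElement.prototype;

if (canChooseSpeaker) {
  audioElement.setSinkId(selectedOutputDeviceId).catch(console.error);
} else {
  // Safari: hide or disable any speaker-selection UI.
}
```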

source:  https://gist.github.com/thehunmonkgroup/1e687259167e3a48a55cd0f3260deb70

Related issue: https://bugs.webkit.org/show_bug.cgi?id=179415

PeerConnections & Calling

Beware, no VP8 support

Support for VP8 has now been added as of iOS 12.2. See https://webkit.org/blog/8672/on-the-road-to-webrtc-1-0-including-vp8/

While the W3C spec clearly states that support for the VP8 video codec (along with the H.264 codec) is to be implemented, Apple has thus far chosen to not support it. Sadly, this is anything but a technical issue, as libwebrtc includes VP8 support, and Webkit actively disables  it.

So at this time, my advice to achieve the best interoperability in various scenarios is:

  • Multiparty MCU – make sure that H.264 is a supported codec
  • Multiparty SFU – use H.264
  • Multiparty Mesh and peer to peer – pray everyone can negotiate a common codec

I say best interop because while this gets you a long way, it won’t be all the way. For example, Chrome for Android does not support software H.264 encoding yet. In my testing, many (but not all) Android phones have hardware H.264 encoding, but those that are missing hardware encoding will not work in Chrome for Android.

Associated bug reports:

  • https://bugs.webkit.org/show_bug.cgi?id=167257
  • https://bugs.webkit.org/show_bug.cgi?id=173141
  • https://bugs.chromium.org/p/chromium/issues/detail?id=719023

Send/receive only streams

As previously mentioned, iOS doesn’t support the legacy WebRTC APIs. However, not all browser implementations fully support the current specification either.

As of this writing, a good example is creating a send-only audio/video peer connection. iOS doesn't support the legacy RTCPeerConnection.createOffer() options of offerToReceiveAudio / offerToReceiveVideo, and the current stable Chrome doesn't support the RTCRtpTransceiver spec by default.

Other more esoteric bugs and limitations

There are certainly other corner cases you can hit that seem a bit out of scope for this post. However, an excellent resource should you run aground is the Webkit issue queue, which you can filter just for WebRTC-related issues: https://bugs.webkit.org/buglist.cgi?component=WebRTC&list_id=4034671&product=WebKit&resolution=—

Remember, Webkit/Apple’s implementation is young

It’s still missing some features (like the speaker selection mentioned above), and in my testing isn’t as stable as the more mature implementation in Google Chrome.

There have also been some major bugs — capturing audio was completely broken for the majority of the iOS 12 Beta release cycle (thankfully they finally fixed that in Beta 8).

Apple’s long-term commitment to WebRTC as a platform isn’t clear, particularly because they haven’t released much information about it beyond basic support. As an example, the previously mentioned lack of VP8 support is troubling with respect to their intention to honor the agreed upon W3C specifications.

These are things worth thinking about when considering a browser-native implementation versus a native app. For now, I’m cautiously optimistic, and hopeful that their support of WebRTC will continue, and extend into other non-Safari browsers on iOS.

{"author": "Chad Phillips"}


Reader Interactions


September 7, 2018 at 9:42 am

One of the most detailed posts I’ve seen on the subject; thank you Chad, for sharing.


September 11, 2018 at 7:04 am

Please also note that Safari does not support data channels.


September 11, 2018 at 12:40 pm

@JSmitty, all of the ‘RTCDataChannel’ examples at https://webrtc.github.io/samples/ do work in Safari on MacOS, but do not currently work in Safari on iOS 11/12. I’ve filed https://bugs.webkit.org/show_bug.cgi?id=189503 and https://github.com/webrtc/samples/issues/1123 — would like to get some feedback on those before I incorporate this info into the post. Thanks for the heads up!


September 26, 2018 at 2:44 pm

OK, so I’ve confirmed data channels DO work in Safari on iOS, but there’s a caveat: iOS does not include local ICE candidates by default, and many of the data channel examples I’ve seen depend on that, as they’re merely sending data between two peer connections on the same device.

See https://bugs.webkit.org/show_bug.cgi?id=189503#c2 for how to temporarily enable local ICE on iOS.


January 22, 2020 at 4:21 pm

Great article. Thanks Chad & Chad for sharing your expertise.

As to DataChannel support. Looks like Safari officially still doesn’t support it according to the support matrix. https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel

My own testing shows that DataChannel works between two Safari browser windows. However at this time (Jan 2020) it does not work between Chrome and Safari windows. Also fails between Safari and aiortc (Python WebRTC provider). DataChannel works fine between Chrome and aiortc.

A quick way to test this problem is via sharedrop.io Transferring files works fine between same brand browser windows, but not across brands.

Hope Apple is working on the compatibility issues with Chrome.


September 13, 2018 at 2:37 pm

Nice summary Chad. Thanks for this! –


September 18, 2018 at 4:29 pm

Very good post, Chad. Just what I was looking for. Thanks for sharing this knowledge. 🙂


October 4, 2018 at 10:11 am

Thanks for this Chad, currently struggling with this myself, where a portable ‘web’ app is being written.. I’m hopeful it will creep into wkwebview soon!


October 5, 2018 at 2:43 am

Thanks for detailing the issues.

One suggestion for any future article would be including the iOS Safari limitation on simultaneous playing of multiple elements with audio present.

This means refactoring so that multiple (remote) audio sources are rendered by a single element.

October 5, 2018 at 9:46 am

There’s a good bit of detail/discussion about this limitation here: https://bugs.webkit.org/show_bug.cgi?id=176282

Media servers that mix the audio are a good solution.


December 18, 2018 at 1:10 pm

I'm facing the same issue: a new getUserMedia() request kills the existing stream track. Let's see whether this helps me or not.

December 19, 2018 at 6:23 am

iOS calling getUserMedia() again kills video display of first getUserMedia(). This is the issue I’m facing but I want to pass the stream from one peer to another peer.


April 26, 2019 at 12:07 am

Thank you Chad for sharing this, I was struggling with the resolution issue on iOS and I was not sure why I was not getting the full hd streaming. Hope this will get supported soon.


May 21, 2019 at 12:54 am

VP8 is a nightmare. I work on a platform where we publish user-generated content, including video, and the lack of support for VP8 forces us to do expensive transcoding on these videos. I wonder why vendors won't just settle on a universal codec for mobile video.

August 18, 2019 at 2:17 pm

VP8 is supported as of iOS 12.2: https://webkit.org/blog/8672/on-the-road-to-webrtc-1-0-including-vp8/


July 3, 2019 at 3:38 am

Great Post! Chad I am facing an issue with iOS Safari, The issue is listed below. I am using KMS lib for room server handling and calling, There wasn’t any support for Safari / iOS safari in it, I added adapter.js (shim) to make my application run on Safari and iOS (Safari). After adding it worked perfectly on Safari and iOS, but when more than 2 persons join the call, The last added remote stream works fine but the existing remote stream(s) get struck/disconnected which means only peer to peer call works fine but not multiple remote streams. Can you please guide how to handle multiple remote streams in iOS (Safari). Thanks

July 3, 2019 at 1:45 pm

Your best bet is probably to search the webkit bugtracker, and/or post a bug there.


August 7, 2019 at 6:40 pm

No low/limited video resolutions: 1920×1080 not supported -> are you talking about IOS12 ? Because I’m doing 4K on IOS 12.3.1 with janus echo test with iphone XS Max (only one with 4K front cam) Of course if I run your script on my MBP it will say fullHD not supported -> because the cam is only 720p.

August 18, 2019 at 2:20 pm

That may be a standard camera resolution on that particular iPhone. The larger issue has been that only resolutions natively supported by the camera have been available, leading to difficulty in reliably selecting resolutions in apps, especially lower resolutions like those used in thumbnails.

Thankfully, this appears to be fully addressed in the latest beta of iOS 13.


April 18, 2020 at 7:01 am

How many days of work I lost before finding this article. It's amazing and explains a lot of the reasons for all the strange bugs in iOS. Thank you so much.


September 21, 2020 at 11:38 am

Hi, I'm having issues with Safari on iOS. In the video tag, adding autoplay and playsinline doesn't work with our WebRTC implementation. Obviously it works fine in any browser on any other platform.

I need to add the controls tag, then manually go to full screen and press play.

Is there a way to play the video inside the web page ?


December 9, 2020 at 2:35 am

First of all, thanks for detailing the issues.

This article is unique to provide many insides for WebRTC/Safari related issues. I learned a lot and applied some the techniques in our production application.

But I had very unique case which I am struggling with right now, as you might guess with Safari. I would be very grateful if you can help me or at least to guide to the right direction.

We have webrtc-based one-2-one video chat, one side always mobile app (host) who is the initiator and the other side is always browser both desktop and mobile. Making the app working across different networks was pain in the neck up to recently, but managed to fix this by changing some configurations. So the issue was in different networks WebRTC was not generating relay and most of the time server reflexive candidates, as you know without at least stun provided candidates parties cannot establish any connection. Solution was simple as though it look a lot of search on google, ( https://github.com/pion/webrtc/issues/810 ), we found out that mobile data providers mostly assigning IPv6 to mobile users. And when they used mobile data plan instead of local wifi, they could not connect to each other. By the way, we are using cloud provider for STUN/TURN servers (Xirsys). And when we asked their technical support team they said their servers should handle IPv6 based requests, but in practice it did not work. So we updated RTCPeerConnection configurations, namely, added optional constraints (and this optional constraints are also not provided officially, found them from other non official sources), the change was just disabling IPv6 on both mobile app (iOS and Android) and browser. After this change, it just worked perfectly until we found out Safari was not working at all. So we reverted back for Safari and disabled IPv6 for other cases (chrome, firefox, Android browsers)

const iceServers = [ { urls: “stun:” }, { urls: [“turn:”,”turn:”,… ], credential: “secret”, username: “secret” } ];

let RTCConfig; // just dirty browser detection const ua = navigator.userAgent.toLocaleLowerCase(); const isSafari = ua.includes(“safari”) && !ua.includes(“chrome”);

if (isSafari) { RTCConfig = iceServers; } else { RTCConfig = { iceServers, constraints: { optional: [{ googIPv6: false }] } }; }

If I wrap the iceServers array inside an envelope object with the optional constraints and use it in new RTCPeerConnection(RTCConfig), it throws an error saying: Attempted to assign readonly property, pointing into => safari_shim.js : 255

Can you please help with this issue, our main customers use iPhone, so making our app work in Safari across different networks are very critical to our business. If you provide some kind of paid consultation, it is also ok for us

Looking forward to hearing from you


July 13, 2022 at 2:42 pm

Thanks for the great summary regarding Safari/iOS. The work-around for the low-bandwidth issue is very interesting. I played with the sample and it worked as expected. It's played on the same device, isn't it? When I tried to add a similar "a=AS:500\r\n" to the sdp and tested it on different devices – one being a Windows laptop with Chrome, another an iPad with Safari – it seemed not to work. The symptom was: the stream was not received or sent. In a word, the connection for media communications was not there. I checked the sdp, it's like,

"sdp": { "type": "offer", "sdp": "v=0\r\n o=- 3369656808988177967 2 IN IP4 127.0.0.1\r\n s=-\r\n t=0 0\r\n a=group:BUNDLE 0 1 2\r\n a=extmap-allow-mixed\r\n a=msid-semantic: WMS 7BLOSVujr811EZHSiFZI2t8yMML8LpOgo0in\r\n m=audio 9 UDP/TLS/RTP/SAVPF 111 63 103 104 9 0 8 106 105 13 110 112 113 126\r\n c=IN IP4 0.0.0.0\r\n b=AS:500\r\n ... }

Also, I didn't quite understand this statement in the article: "I've worked around this issue by restricting the bitrate of the sent video, which is a fairly quick and dirty compromise. Another solution that would take a bit more work is to handle downscaling the video stream in your app before passing it to the peer connection." Don't both approaches work on the sending side?
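A note on the offer shown above: the b=AS:500 line appears inside the audio media section (after the m=audio line and its c= line), so it caps the audio bandwidth rather than the video. Bandwidth munging is usually applied to the video section of the SDP before calling setLocalDescription; a rough sketch, using a hypothetical setVideoBandwidth helper:

// Insert "b=AS:<kbps>" after the c= line of the m=video section (hypothetical helper).
function setVideoBandwidth(sdp, kbps) {
  const lines = sdp.split("\r\n");
  const videoIndex = lines.findIndex(line => line.startsWith("m=video"));
  if (videoIndex === -1) return sdp;
  for (let i = videoIndex + 1; i < lines.length && !lines[i].startsWith("m="); i++) {
    if (lines[i].startsWith("c=")) {
      lines.splice(i + 1, 0, "b=AS:" + kbps);
      break;
    }
  }
  return lines.join("\r\n");
}

// Usage sketch:
// const offer = await pc.createOffer();
// await pc.setLocalDescription({ type: offer.type, sdp: setVideoBandwidth(offer.sdp, 500) });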


Developing for Safari 11

We recommend migrating your application to the API provided by our preferred video partner, Zoom. We've prepared this migration guide to assist you in minimizing any service disruption.

Version Compatibility

Safari, from version 11, on macOS and iOS is supported in twilio-video.js 1.2.1 and greater. Earlier versions of Safari are not compatible with twilio-video.js because they do not support WebRTC.

Safari, from version 12.1, includes support for VP8 and VP8 simulcast. twilio-video.js 1.2.1 will automatically offer VP8 when supported by Safari. However, if you are looking at adding VP8 simulcast on Safari 12.1+, twilio-video.js 1.17.0 or higher is required.

The rest of this document discusses best practices for Safari < 12.1 as those versions do not include VP8.

Codec Selection

Safari supports only the H.264 codec.

Programmable Video uses WebRTC, a standard set of browser APIs for real-time audio and video in the browser. Chrome, Firefox, Edge, and Safari web browsers all support WebRTC APIs, but each has its own nuances.

When it comes to video encoding, Chrome, Edge, and Firefox support two video codecs: VP8 and H.264. Safari only supports H.264 today. This means that other browsers and mobile apps must send H.264-encoded video if they want Safari users to see the video tracks they share.

Our goal is to make it so that you don't need to worry about codec selection, but there are a few things you'll want to know as you plan support for Safari < 12.1 in your application.

H.264 in Peer-to-Peer Rooms

If your app uses Peer-to-Peer Rooms, the codec selection should be seamless: Chrome and Firefox both support H.264, and will automatically send and receive H.264 video tracks to any Safari < 12.1 users who join the Room.

If you ship a native mobile version of your app and use Peer-to-Peer Rooms, you'll need to use version 2.0.0-preview1 of our iOS or Android SDK to send and receive H.264 in a Peer-to-Peer Room. Earlier versions of our native SDKs will not be able to send or receive video to Safari < 12.1 devices, because they do not support H.264.

You can use the codec preferences API to force the browser to use a specific video codec.
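For example, with twilio-video.js the preference is passed in the ConnectOptions at connect time; a sketch (the option name preferredVideoCodecs is taken from the SDK's ConnectOptions, so verify it against the documentation for the SDK version you use, and token is assumed to be a valid Access Token):

const { connect } = require('twilio-video');

// Ask the SDK to prefer H.264 so that Safari < 12.1 participants can decode the published video.
connect(token, {
  name: 'my-room',
  preferredVideoCodecs: ['H264']
}).then(room => {
  console.log('Connected to', room.name);
});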

H.264 in Group Rooms

If your app uses Group Rooms, you'll need to make a decision about how you want to support Safari < 12.1.

Group Rooms can be configured so that only the H.264 codec is allowed. When this option is set, all endpoints must use the H.264 codec when participating in the Room. See the Rooms REST API for more information.
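As a sketch, using the Node.js helper library (the videoCodecs parameter name here reflects the Rooms REST API's VideoCodecs option; confirm the exact name against the current API reference):

const twilio = require('twilio');
const client = twilio(accountSid, authToken); // your Twilio API credentials

// Create a Group Room that only allows the H.264 video codec.
client.video.rooms
  .create({ uniqueName: 'safari-compatible-room', type: 'group', videoCodecs: ['H264'] })
  .then(room => console.log('Created room', room.sid));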

Interoperability Between Safari < 12.1 and iOS and Android Native Apps

We've added support for H.264 codec in our Android and iOS SDKs starting at version 2.0.0-preview1. If you ship a native mobile version of your app, and you want it to be able to talk to Safari, you'll need to update to 2.0.0-preview1 or higher.

Keep in mind that if you create Rooms with only H.264 support as described above, apps running earlier versions of our mobile SDKs will not be able to connect.

Other Safari < 12.1 Considerations

Safari < 12.1 can only capture audio and video from one tab at a time.

Keep this in mind especially while you're developing your app and testing locally; you'll need to use a second browser if you want to test bi-directional video on your local machine.

Safari < 12.1 will not allow you to capture audio and video on insecure sites

Your site must be served over HTTPS in order to access the user's microphone and camera. This can make development difficult, so see the tips below for details.
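A quick runtime check can save some confusion here; a minimal sketch:

// getUserMedia is only exposed in secure contexts (HTTPS, or localhost during development).
if (!window.isSecureContext) {
  console.warn('Not a secure context; camera and microphone access will fail.');
}

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(stream => {
    // Attach the stream to a <video> element, a peer connection, etc.
  })
  .catch(err => console.error('getUserMedia failed:', err));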

Tips for Developing on Safari < 12.1

If you're building a video app for Safari users, we recommend downloading the Safari Technology Preview. The Technology Preview release has some additional options that make development and debugging a bit easier; you can find them under the Develop > WebRTC menu.

A couple of useful options:

  • Enabling Media Capture on Insecure Sites lets you capture audio and video from the microphone and camera without using HTTPS.
  • Use Mock Capture Devices simulates audio and video input in the browser, which can come in handy for troubleshooting or automated testing.

Read more about developing WebRTC applications for Safari < 12.1 on the WebKit blog.

[SOLVED] Play .webm video on safari

Does anyone know how to play .webm videos on Safari?

Normally, for video textures, I use mp4 files. There are dozens of converters on the net, so it shouldn’t be too tough.

But I need to play transparent video, and WebM supports transparency.

Safari hasn't fully implemented WebM support yet, so there's not much you can do until Apple adds it. There used to be a number of plug-ins for desktop Safari that enabled WebM playback, but support for plug-ins has been dropped.

For the moment if you would like to play a transparent video in Playcanvas you could use some form of chromakey shader with a green-screen mp4 video:

As you mentioned, the quality is poor; that's why I moved to transparent video. Is there any other way of playing .webm videos in iOS Safari or Chrome? It works fine on Android.

https://caniuse.com/#search=webm

Partial Support: Only supports VP8 used in WebRTC.

Chrome and the other browsers on iOS still use the webview of Safari for rendering (Apple security rule). So basically they can’t have a different feature set than what Safari provides.

Indeed, as Will said, there is partial support for the Google VP8 codec in WebRTC on iOS, though I have no idea whether that can be used to decode a file; most likely it is used only to compress the camera stream exchanged between peers.

Thank you so much, Leonidas and Will. After reading some blogs, I've concluded that I can't do it, so I'll do it for Android only.


Agora Releases VP9 Video Support for Safari

Agora is excited to be the first real-time video platform-as-a-service (PaaS) provider to release full support for VP9 in browsers, including Safari. Full VP9 support comes with the release of Web SDK 14.9.2. VP9 provides twice the quality of the VP8 codec at the same bitrate. In my 12 years of working with WebRTC, this is a truly impressive milestone for the industry.

What is VP9 and why is it important?

VP9 is a video coding format developed by Google. VP9 is the successor to VP8, which is currently the default for real-time video on the web. VP9 enables twice the compression of VP8 and is customized for video resolutions greater than 1080p (such as UHD). 

VP9 is a significant advancement in the world of video codecs for several reasons: 

  • Improved Compression: VP9 provides better compression than its predecessor (VP8) and is often compared favorably to HEVC/H.265 in terms of compression efficiency. Better compression means smaller file sizes without compromising video quality, which leads to faster streaming and reduced bandwidth usage.
  • Adaptive Streaming: VP9 is well-suited for adaptive bitrate streaming, such as YouTube’s Dynamic Adaptive Streaming over HTTP (DASH). This adaptability ensures that videos stream smoothly across various network conditions.
  • Support for 4K and Beyond: VP9 is designed to handle high-resolution video, making it an excellent choice for 4K streaming and even resolutions beyond that.
  • Broad Adoption: Major platforms, like YouTube, have adopted VP9 due to its efficiency, leading to a significant portion of internet traffic being encoded in VP9.
  • Power Efficiency: For mobile devices and other battery-powered gadgets, VP9 is designed to decode efficiently, conserving power and extending battery life.
  • Web Integration: Being a product from Google, VP9 has robust support in browsers, particularly in Chrome. This integration is essential for the web, especially for platforms that rely heavily on video content. 

In summary, VP9 is a big deal because it represents a combination of cost-saving (due to being royalty-free), technological advancement (with improved compression and resolution support), and widespread industry adoption, all of which benefits both content creators and consumers.

Challenges with VP9 on Safari

While VP9 has been around since 2016, it has presented challenges with a lack of hardware support and Safari browser support. To this day, Safari does not provide network and compute adaptation other than to reduce the outgoing frame-rate and bit-rate. This means that a video stream coming from an iOS device will not look good on the receiving end when the bitrate drops or the device isn’t capable of encoding 720p video at 30fps.

Comparing VP9 to VP8

This comparison shows video coming from two identical iPhone 11s connected to the same Wi-Fi access point in an office environment with high packet loss. On the left is VP8 and on the right is VP9. The bit rate is limited to 1 Mbps on each device, and the difference in quality is impressive.

Agora now offers full VP9 support for Safari

Agora has now solved the issues with VP9 in the Safari browser by providing dynamic resolution scaling on Safari 16.0 and later versions for desktop and mobile to adapt to the needs of a low-compute device or limited uplink. This allows the VP9 picture in Safari to remain stable with a high frame rate and no blockiness as seen in Google Chrome and other browsers.

VP9 is the latest innovation from Agora that ensures video quality is maintained in all environments. Real-time video quality is vital for ensuring a seamless user experience, fostering clear and effective communication, and reflecting professionalism. In contexts ranging from business meetings to telemedicine, high-quality video is essential for accuracy, engagement, and credibility.  

As technology advances, users increasingly expect superior video quality as a standard. It's essential to provide a seamless user experience that fosters clear and effective communication and reduces visual strain during prolonged usage. To find out more about Agora's other innovations in real-time video quality, see: Revolutionizing Live Video Quality: Agora Unveils Next-Gen Enhancements.


Safari 15.0 VP9 support

I am using macOS Catalina due to design preferences. But Safari has updated to version 15.0, which claims support for the VP9 codec, allowing 4K content playback on YouTube. For some reason, I do not have that option; the maximum available resolution is 1080p. This is strange, as my friend uses an iMac on Catalina and gets 4K. My device is a MacBook Pro 16" (late 2019).

I would appreciate your reply.

MacBook Pro 16″, macOS 10.15

Posted on Oct 18, 2021 1:18 PM


Ronasara

Oct 18, 2021 4:38 PM in response to Mansurius

There have recently been many reports of a broad range of new problems on this support site about the latest release of Safari v15. It is included in the downloads for Big Sur and Catalina. If you are experiencing these problems, you can use another browser such as Firefox or Chrome and they will work for you until a newer version of Safari is released. While some of the same issues keep arising, they are not consistent and some computers (like mine) seem to experience no problems.


UDN Web Docs: MDN Backup

  • Web video codec guide

Due to the sheer size of uncompressed video data, it's necessary to compress it significantly in order to store it, let alone transmit it over a network. Imagine the amount of data needed to store uncompressed video:

  • A single frame of high definition (1920x1080) video in full color (4 bytes per pixel) is 8,294,400 bytes.
  • At a typical 30 frames per second, each second of HD video would occupy 248,832,000 bytes (~249 MB).
  • A minute of HD video would need 14.93 GB of storage.
  • A fairly typical 30 minute video conference would need about 447.9 GB of storage, and a 2-hour movie would take almost 1.79 TB (that is, 1,790 GB).

Not only is the required storage space enormous, but the network bandwidth needed to transmit an uncompressed video like that would be enormous, at 249 MB/sec—not including audio and overhead. This is where video codecs come in. Just as audio codecs do for the sound data, video codecs compress the video data and encode it into a format that can later be decoded and played back or edited.
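Those figures are easy to reproduce; a quick sketch:

// Uncompressed 1080p video at 4 bytes per pixel and 30 frames per second.
const bytesPerFrame = 1920 * 1080 * 4;        // 8,294,400 bytes per frame
const bytesPerSecond = bytesPerFrame * 30;    // 248,832,000 bytes (~249 MB) per second
const bytesPerMinute = bytesPerSecond * 60;   // ~14.93 GB per minute
const thirtyMinuteCall = bytesPerMinute * 30; // ~447.9 GB
const twoHourMovie = bytesPerMinute * 120;    // ~1.79 TB

console.log((bytesPerSecond / 1_000_000).toFixed(1) + ' MB per second, uncompressed');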

Most video codecs are lossy , in that the decoded video does not precisely match the source. Some details may be lost; the amount of loss depends on the codec and how it's configured, but as a general rule, the more compression you achieve, the more loss of detail and fidelity will occur. Some lossless codecs do exist, but they are typically used for archival and storage for local playback rather than for use on a network.

This guide introduces the video codecs you're most likely to encounter or consider using on the web, summaries of their capabilities and any compatibility and utility concerns, and advice to help you choose the right codec for your project's video.

Common codecs

The following video codecs are those which are most commonly used on the web. For each codec, the containers (file types) that can support them are also listed. Each codec provides a link to a section below which offers additional details about the codec, including specific capabilities and compatibility issues you may need to be aware of.

Factors affecting the encoded video

As is the case with any encoder, there are two basic groups of factors affecting the size and quality of the encoded video: specifics about the source video's format and contents, and the characteristics and configuration of the codec used while encoding the video.

The simplest guideline is this: anything that makes the encoded video look more like the original, uncompressed, video will generally make the resulting data larger as well. Thus, it's always a tradeoff of size versus quality. In some situations, a greater sacrifice of quality in order to bring down the data size is worth that lost quality; other times, the loss of quality is unacceptable and it's necessary to accept a codec configuration that results in a correspondingly larger file.

Effect of source video format on encoded output

The degree to which the format of the source video will affect the output varies depending on the codec and how it works. If the codec converts the media into an internal pixel format, or otherwise represents the image using a means other than simple pixels, the format of the original image doesn't make any difference. However, things such as frame rate and, obviously, resolution will always have an impact on the output size of the media.

Additionally, all codecs have their strengths and weaknesses. Some have trouble with specific kinds of shapes and patterns, or aren't good at replicating sharp edges, or tend to lose detail in dark areas, or any number of possibilities. It all depends on the underlying algorithms and mathematics.

The degree to which these affect the resulting encoded video will vary depending on the precise details of the situation, including which encoder you use and how it's configured. In addition to general codec options, the encoder could be configured to reduce the frame rate, to clean up noise, and/or to reduce the overall resolution of the video during encoding.

Effect of codec configuration on encoded output

The algorithms used to encode video typically use one or more of a number of general techniques to perform their encoding. Generally speaking, any configuration option that is intended to reduce the output size of the video will probably have a negative impact on the overall quality of the video, or will introduce certain types of artifacts into the video. It's also possible to select a lossless form of encoding, which will result in a much larger encoded file but with perfect reproduction of the original video upon decoding.

In addition, each encoder utility may have variations in how they process the source video, resulting in differences in the output quality and/or size.

The options available when encoding video, and the values to be assigned to those options, will vary not only from one codec to another but depending on the encoding software you use. The documentation included with your encoding software will help you to understand the specific impact of these options on the encoded video.

Compression artifacts

Artifacts are side effects of a lossy encoding process in which the lost or rearranged data results in visibly negative effects. Once an artifact has appeared, it may linger for a while, because of how video is displayed. Each frame of video is presented by applying a set of changes to the currently-visible frame. This means that any errors or artifacts will compound over time, resulting in glitches or otherwise strange or unexpected deviations in the image that linger for a time.

To resolve this, and to improve seek time through the video data, periodic key frames (also known as intra-frames or i-frames ) are placed into the video file. The key frames are full frames, which are used to repair any damage or artifact residue that's currently visible.

Aliasing is a general term for anything that upon being reconstructed from the encoded data does not look the same as it did before compression. There are many forms of aliasing; the most common ones you may see include:

Color edging

Color edging is a type of visual artifact that presents as spurious colors introduced along the edges of colored objects within the scene. These colors have no intentional color relationship to the contents of the frame.

Loss of sharpness

The act of removing data in the process of encoding video requires that some details be lost. If enough compression is applied, parts or potentially all of the image could lose sharpness, resulting in a slightly fuzzy or hazy appearance.

Lost sharpness can make text in the image difficult to read, as text—especially small text—is very detail-oriented content, where minor alterations can significantly impact legibility.

Lossy compression algorithms can introduce ringing , an effect where areas outside an object are contaminated with colored pixels generated by the compression algorithm. This happens when the algorithm uses blocks that span a sharp boundary between an object and its background. It is particularly common at higher compression levels.

Example of the ringing effect

Note the blue and pink fringes around the edges of the star above (as well as the stepping and other significant compression artifacts). Those fringes are the ringing effect. Ringing is similar in some respects to mosquito noise , except that while the ringing effect is more or less steady and unchanging, mosquito noise shimmers and moves.

Ringing is another type of artifact that can make it particularly difficult to read text contained in your images.

Posterizing

Posterization occurs when the compression results in the loss of color detail in gradients. Instead of smooth transitions through the various colors in a region, the image becomes blocky, with blobs of color that approximate the original appearance of the image.

Example of posterization

Note the blockiness of the colors in the plumage of the bald eagle in the photo above (and the snowy owl in the background). The details of the feathers are largely lost due to these posterization artifacts.

Contouring or color banding is a specific form of posterization in which the color blocks form bands or stripes in the image. This occurs when the video is encoded with too coarse a quantization configuration. As a result, the video's contents show a "layered" look, where instead of smooth gradients and transitions, the transitions from color to color are abrupt, causing strips of color to appear.

Example of an image whose compression has introduced contouring

In the example image above, note how the sky has bands of different shades of blue, instead of being a consistent gradient as the sky color changes toward the horizon. This is the contouring effect.

Mosquito noise

Mosquito noise is a temporal artifact which presents as noise or edge busyness that appears as a flickering haziness or shimmering that roughly follows outside the edges of objects with hard edges or sharp transitions between foreground objects and the background. The effect can be similar in appearance to ringing .

Example of mosquito noise

The photo above shows mosquito noise in a number of places, including in the sky surrounding the bridge. In the upper-right corner, an inset shows a close-up of a portion of the image that exhibits mosquito noise.

Mosquito noise artifacts are most commonly found in MPEG video, but can occur whenever a discrete cosine transform (DCT) algorithm is used; this includes, for example, JPEG still images.

Motion compensation block boundary artifacts

Compression of video generally works by comparing two frames and recording the differences between them, one frame after another, until the end of the video. This technique works well when the camera is fixed in place, or the objects in the frame are relatively stationary, but if there is a great deal of motion in the frame, the number of differences between frames can be so great that compression doesn't do any good.

Motion compensation is a technique that looks for motion (either of the camera or of objects in the frame of view) and determines how many pixels the moving object has moved in each direction. Then that shift is stored, along with a description of the pixels that have moved that can't be described just by that shift. In essence, the encoder finds the moving objects, then builds an internal frame of sorts that looks like the original but with all the objects translated to their new locations. In theory, this approximates the new frame's appearance. Then, to finish the job, the remaining differences are found, then the set of object shifts and the set of pixel differences are stored in the data representing the new frame. This object that describes the shift and the pixel differences is called a residual frame .

There are two general types of motion compensation: global motion compensation and block motion compensation . Global motion compensation generally adjusts for camera movements such as tracking, dolly movements, panning, tilting, rolling, and up and down movements. Block motion compensation handles localized changes, looking for smaller sections of the image that can be encoded using motion compensation. These blocks are normally of a fixed size, in a grid, but there are forms of motion compensation that allow for variable block sizes, and even for blocks to overlap.
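To make the idea concrete, here is a toy sketch (grayscale pixels, a single global motion vector, and no block subdivision or transform step) of how a predicted frame and its residual could be computed:

// Predict the new frame by shifting the previous frame by a motion vector (dx, dy).
function predictFrame(prev, width, height, dx, dy) {
  const predicted = new Uint8ClampedArray(prev.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const sx = Math.min(Math.max(x - dx, 0), width - 1);  // clamp at the frame edges
      const sy = Math.min(Math.max(y - dy, 0), height - 1);
      predicted[y * width + x] = prev[sy * width + sx];
    }
  }
  return predicted;
}

// The residual is the signed difference between the actual frame and the prediction;
// a real encoder would transform, quantize, and entropy-code these values.
function computeResidual(actual, predicted) {
  const residual = new Int16Array(actual.length);
  for (let i = 0; i < actual.length; i++) residual[i] = actual[i] - predicted[i];
  return residual;
}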

There are, however, artifacts that can occur due to motion compensation. These occur along block borders, in the form of sharp edges that produce false ringing and other edge effects. These are due to the mathematics involved in the coding of the residual frames, and can be easily noticed before being repaired by the next key frame.

Reduced frame size

In certain situations, it may be useful to reduce the video's dimensions in order to improve the final size of the video file. While the immediate loss of size or smoothness of playback may be a negative factor, careful decision-making can result in a good end result. If a 1080p video is reduced to 720p prior to encoding, the resulting video can be much smaller while having much higher visual quality; even after scaling back up during playback, the result may be better than encoding the original video at full size and accepting the quality hit needed to meet your size requirements.

Reduced frame rate

Similarly, you can remove frames from the video entirely and decrease the frame rate to compensate. This has two benefits: it makes the overall video smaller, and that smaller size allows motion compensation to accomplish even more for you. For example, instead of computing motion differences for two frames that are two pixels apart due to inter-frame motion, skipping every other frame could lead to computing a difference that comes out to four pixels of movement. This lets the overall movement of the camera be represented by fewer residual frames.

The absolute minimum frame rate that a video can be before its contents are no longer perceived as motion by the human eye is about 12 frames per second. Less than that, and the video becomes a series of still images. Motion picture film is typically 24 frames per second, while standard definition television is about 30 frames per second (slightly less, but close enough) and high definition television is between 24 and 60 frames per second. Anything from 24 FPS upward will generally be seen as satisfactorily smooth; 30 or 60 FPS is an ideal target, depending on your needs.

In the end, the decisions about what sacrifices you're able to make are entirely up to you and/or your design team.

Codec details

The AOMedia Video 1 ( AV1 ) codec is an open format designed by the Alliance for Open Media specifically for internet video. It achieves higher data compression rates than VP9 and H.265/HEVC , and as much as 50% higher rates than AVC . AV1 is fully royalty-free and is designed for use by both the <video> element and by WebRTC .

AV1 currently offers three profiles: main, high, and professional, with increasing support for color depths and chroma subsampling. In addition, a series of levels are specified, each defining limits on a range of attributes of the video. These attributes include frame dimensions, image area in pixels, display and decode rates, average and maximum bit rates, and limits on the number of tiles and tile columns used in the encoding/decoding process.

For example, AV1 level 2.0 offers a maximum frame width of 2048 pixels and a maximum height of 1152 pixels, but its maximum frame size in pixels is 147,456, so you can't actually have a 2048x1152 video at level 2.0. It's worth noting, however, that at least for Firefox and Chrome, the levels are actually ignored at this time when performing software decoding, and the decoder just does the best it can to play the video given the settings provided. For compatibility's sake going forward, however, you should stay within the limits of the level you choose.

The primary drawback to AV1 at this time is that it is very new, and support is still in the process of being integrated into most browsers. Additionally, encoders and decoders are still being optimized for performance, and hardware encoders and decoders are still mostly in development rather than production. For this reason, encoding a video into AV1 format takes a very long time, since all the work is done in software.
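Because support is still uneven, it's worth feature-detecting AV1 before relying on it; a sketch using a Main-profile codec string:

// "av01.0.05M.08" = AV1, Main profile (0), level 3.1 (05), Main tier (M), 8-bit depth.
const av1Type = 'video/webm; codecs="av01.0.05M.08"';

if (MediaSource.isTypeSupported(av1Type)) {
  console.log('AV1 playback through Media Source Extensions looks available.');
}

// The Media Capabilities API gives a richer answer (supported, smooth, power-efficient).
navigator.mediaCapabilities.decodingInfo({
  type: 'file',
  video: { contentType: av1Type, width: 1920, height: 1080, bitrate: 2000000, framerate: 30 }
}).then(info => console.log(info.supported, info.smooth, info.powerEfficient));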

For the time being, because of these factors, AV1 is not yet ready to be your first choice of video codec, but you should watch for it to be ready to use in the future.

[1] ISO Base Media File Format

[2] See the AV1 specification's tables of levels , which describe the maximum resolutions and rates at each level.

AVC (H.264)

The MPEG-4 specification suite's Advanced Video Coding ( AVC ) standard is specified by the identical ITU H.264 specification and the MPEG-4 Part 10 specification. It's a motion compensation based codec that is widely used today for all sorts of media, including broadcast television, RTP videoconferencing, and as the video codec for Blu-Ray discs.

AVC is highly flexible, with a number of profiles with varying capabilities; for example, the Constrained Baseline Profile is designed for use in videoconferencing and mobile scenarios, using less bandwidth than the Main Profile (which is used for standard definition digital TV in some regions) or the High Profile (used for Blu-Ray Disc video). Most of the profiles use 8-bit color components and 4:2:0 chroma subsampling; the High 10 Profile adds support for 10-bit color, and advanced forms of High 10 add 4:2:2 and 4:4:4 chroma subsampling.

AVC also has special features such as support for multiple views of the same scene (Multiview Video Coding), which allows, among other things, the production of stereoscopic video.

AVC is a proprietary format, however, and numerous patents are owned by multiple parties regarding its technologies. Commercial use of AVC media requires a license, though the MPEG LA patent pool does not require license fees for streaming internet video in AVC format as long as the video is free for end users.

Non-web browser implementations of WebRTC (any implementation which doesn't include the JavaScript APIs) are required to support AVC as a codec in WebRTC calls. While web browsers are not required to do so, some do.

In HTML content for web browsers, AVC is broadly compatible and many platforms support hardware encoding and decoding of AVC media. However, be aware of its licensing requirements before choosing to use AVC in your project!
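You can check what a given browser will accept before committing to AVC; a sketch using the Constrained Baseline codec string:

// "avc1.42E01E" = AVC Constrained Baseline profile, level 3.0; "mp4a.40.2" = AAC-LC audio.
const avcType = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';

const probe = document.createElement('video');
console.log('canPlayType:', probe.canPlayType(avcType));            // "", "maybe", or "probably"
console.log('MSE support:', MediaSource.isTypeSupported(avcType));  // true or false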

[1] Firefox support for AVC is dependent upon the operating system's built-in or preinstalled codecs for AVC and its container in order to avoid patent concerns.

ITU's H.263 codec was designed primarily for use in low-bandwidth situations. In particular, its focus is for video conferencing on PSTN (Public Switched Telephone Networks), RTSP , and SIP (IP-based videoconferencing) systems. Despite being optimized for low-bandwidth networks, it is fairly CPU intensive and may not perform adequately on lower-end computers. The data format is similar to that of MPEG-4 Part 2.

H.263 has never been widely used on the web. Variations on H.263 have been used as the basis for other proprietary formats, such as Flash video or the Sorenson codec. However, no major browser has ever included H.263 support by default. Certain media plugins have enabled support for H.263 media.

Unlike most codecs, H.263 defines fundamentals of an encoded video in terms of the maximum bit rate per frame (picture), or BPPmaxKb . During encoding, a value is selected for BPPmaxKb, and then the video cannot exceed this value for each frame. The final bit rate will depend on this, the frame rate, the compression, and the chosen resolution and block format.

H.263 has been superseded by H.264 and is therefore considered a legacy media format which you generally should avoid using if you can. The only real reason to use H.263 in new projects is if you require support on very old devices on which H.263 is your best choice.

H.263 is a proprietary format, with patents held by a number of organizations and companies, including Telenor, Fujitsu, Motorola, Samsung, Hitachi, Polycom, Qualcomm, and so on. To use H.263, you are legally obligated to obtain the appropriate licenses.

[1] While Firefox does not generally support H.263, the OpenMax platform implementation (used for the Boot to Gecko project upon which Firefox OS was based) did support H.263 in 3GP files.

[2] Version 1 of H.263 specifies a set of picture sizes which are supported. Later versions may support additional resolutions.

HEVC (H.265)

The High Efficiency Video Coding ( HEVC ) codec is defined by ITU's H.265 as well as by MPEG-H Part 2 (the still in-development follow-up to MPEG-4). HEVC was designed to support efficient encoding and decoding of video in sizes including very high resolutions (including 8K video), with a structure specifically designed to let software take advantage of modern processors. Theoretically, HEVC can achieve compressed file sizes half that of AVC but with comparable image quality.

For example, each coding tree unit (CTU)—similar to the macroblock used in previous codecs—consists of a tree of luma values for each sample as well as a tree of chroma values for each chroma sample used in the same coding tree unit, as well as any required syntax elements. This structure supports easy processing by multiple cores.

An interesting feature of HEVC is that the main profile supports only 8 bits per component with 4:2:0 chroma subsampling. Also interesting is that 4:4:4 video is handled specially. Instead of having the luma samples (representing the image's pixels in grayscale) and the Cb and Cr samples (indicating how to alter the grays to create color pixels), the three channels are instead treated as three monochrome images, one for each color, which are then combined during rendering to produce a full-color image.

HEVC is a proprietary format and is covered by a number of patents. Licensing is managed by MPEG LA ; fees are charged to developers rather than to content producers and distributors. Be sure to review the latest license terms and requirements before making a decision on whether or not to use HEVC in your app or web site!

[1] Internet Explorer and Edge only support HEVC on devices with a hardware codec.

[2] Mozilla will not support HEVC while it is encumbered by patents.

The MPEG-4 Video Elemental Stream ( MP4V-ES ) format is part of the MPEG-4 Part 2 Visual standard. While in general, MPEG-4 Part 2 video is not used by anyone because of its lack of compelling value relative to other codecs, MP4V-ES does have some usage on mobile. MP4V is essentially H.263 encoding in an MPEG-4 container.

Its primary purpose is to be used to stream MPEG-4 audio and video over an RTP session. However, MP4V-ES is also used to transmit MPEG-4 audio and video over a mobile connection using 3GP .

You almost certainly don't want to use this format, since it isn't supported in a meaningful way by any major browsers, and is quite obsolete. Files of this type should have the extension .mp4v , but sometimes are inaccurately labeled .mp4 .

[1] Firefox supports MP4V-ES in 3GP containers only.

[2] Chrome does not support MP4V-ES; however, Chrome OS does.

MPEG-1 Part 2 Video

MPEG-1 Part 2 Video was unveiled at the beginning of the 1990s. Unlike the later MPEG video standards, MPEG-1 was created solely by MPEG, without the ITU's involvement.

Because any MPEG-2 decoder can also play MPEG-1 video, it's compatible with a wide variety of software and hardware devices. There are no active patents remaining in relation to MPEG-1 video, so it may be used free of any licensing concerns. However, few web browsers support MPEG-1 video without the support of a plugin, and with plugin use deprecated in web browsers, these are generally no longer available. This makes MPEG-1 a poor choice for use in web sites and web applications.

MPEG-2 Part 2 Video

MPEG-2 Part 2 is the video format defined by the MPEG-2 specification, and is also occasionally referred to by its ITU designation, H.262. It is very similar to MPEG-1 video—in fact, any MPEG-2 player can automatically handle MPEG-1 without any special work—except it has been expanded to support higher bit rates and enhanced encoding techniques.

The goal was to allow MPEG-2 to compress standard definition television, so interlaced video is also supported. The standard definition compression rate and the quality of the resulting video met needs well enough that MPEG-2 is the primary video codec used for DVD video media.

MPEG-2 has several profiles available with different capabilities. Each profile is available in four levels, each of which increases attributes of the video, such as frame rate, resolution, and bit rate, offering support for larger frame dimensions as well. Most profiles use Y'CbCr with 4:2:0 chroma subsampling, but more advanced profiles support 4:2:2 as well. For example, the ATSC specification for television used in North America supports MPEG-2 video in high definition using the Main Profile at High Level, allowing 4:2:0 video at both 1920 x 1080 (30 FPS) and 1280 x 720 (60 FPS), at a maximum bit rate of 80 Mbps.

However, few web browsers support MPEG-2 without the support of a plugin, and with plugin use deprecated in web browsers, these are generally no longer available. This makes MPEG-2 a poor choice for use in web sites and web applications.

Theora , developed by Xiph.org , is an open and free video codec which may be used without royalties or licensing. Theora is comparable in quality and compression rates to MPEG-4 Part 2 Visual and AVC, making it a very good if not top-of-the-line choice for video encoding. But its status as being free from any licensing concerns and its relatively low CPU resource requirements make it a popular choice for many software and web projects. The low CPU impact is particularly useful since there are no hardware decoders available for Theora.

Theora was originally based upon the VP3 codec by On2 Technologies. The codec and its specification were released under the LGPL license and entrusted to Xiph.org, which then developed it into the Theora standard.

One drawback to Theora is that it only supports 8 bits per color component, with no option to use 10 or more in order to avoid color banding. That said, 8 bits per component is still the most commonly-used color format in use today, so this is only a minor inconvenience in most cases. Also, Theora can only be used in an Ogg container. The biggest drawback of all, however, is that it is not supported by Safari, leaving Theora unavailable not only on macOS but on all those millions and millions of iPhones and iPads.

The Theora Cookbook offers additional details about Theora as well as the Ogg container format it is used within.

[1] While Theora doesn't support Variable Frame Rate (VFR) within a single stream, multiple streams can be chained together within a single file, and each of those can have its own frame rate, thus allowing what is essentially VFR. However, this is impractical if the frame rate needs to change frequently.

[2] Edge supports Theora with the optional Web Media Extensions add-on.

The Video Processor 8 ( VP8 ) codec was initially created by On2 Technologies. Following their purchase of On2, Google released VP8 as an open and royalty-free video format under a promise not to enforce the relevant patents. In terms of quality and compression rate, VP8 is comparable to AVC .

If supported by the browser, VP8 allows video with an alpha channel, allowing the video to play with the background able to be seen through the video to a degree specified by each pixel's alpha component.

There is good browser support for VP8 in HTML content, especially within WebM files. This makes VP8 a good candidate for your content, although VP9 is an even better choice if available to you. Web browsers are required to support VP8 for WebRTC, but not all browsers that do so also support it in HTML audio and video elements.
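This split can be checked at runtime: RTCRtpSender.getCapabilities() reports the codecs available to WebRTC, while MediaSource.isTypeSupported() answers for HTML media playback. A sketch:

// Is VP8 available for WebRTC on this browser?
const videoCaps = RTCRtpSender.getCapabilities('video');
const vp8ForWebRTC = (videoCaps?.codecs ?? []).some(c => c.mimeType.toLowerCase() === 'video/vp8');

// Is VP8 available for HTML playback (for example, a WebM file played through MSE)?
const vp8ForPlayback = MediaSource.isTypeSupported('video/webm; codecs="vp8, vorbis"');

console.log({ vp8ForWebRTC, vp8ForPlayback });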

[1] Edge support for VP8 requires the use of Media Source Extensions .

[2] Safari only supports VP8 in WebRTC connections.

[3] Firefox only supports VP8 in MSE when no H.264 hardware decoder is available. Use MediaSource.isTypeSupported() to check for availability.

Video Processor 9 ( VP9 ) is the successor to the older VP8 standard developed by Google. Like VP8, VP9 is entirely open and royalty-free. Its encoding and decoding performance is comparable to or slightly faster than that of AVC, but with better quality. VP9's encoded video quality is comparable to that of HEVC at similar bit rates.

VP9's main profile supports only 8-bit color depth at 4:2:0 chroma subsampling levels, but its profiles include support for deeper color and the full range of chroma subsampling modes. It supports several HDR implementations, and offers substantial freedom in selecting frame rates, aspect ratios, and frame sizes.

VP9 is widely supported by browsers, and hardware implementations of the codec are fairly common. VP9 is one of the two video codecs mandated by WebM (the other being VP8 ). Of note, however, is that Safari supports neither WebM nor VP9, so if you choose to use VP9, be sure to offer a fallback format such as AVC or HEVC for iPhone, iPad, and Mac users.

Aside from the lack of Safari support, VP9 is a good choice if you are able to use a WebM container and are able to provide a fallback video in a format such as AVC or HEVC for Safari users. This is especially true if you wish to use an open codec rather than a proprietary one. If you can't provide a fallback and aren't willing to sacrifice Safari compatibility, you should probably consider a different codec.

Color spaces supported: Rec. 601 , Rec. 709 , Rec. 2020 , SMPTE C , SMPTE-240M (obsolete; replaced by Rec. 709), and sRGB .

[1] Firefox only supports VP9 in MSE when no H.264 hardware decoder is available. Use MediaSource.isTypeSupported() to check for availability.

Choosing a video codec

The decision as to which codec or codecs to use begins with a series of questions to prepare yourself:

  • Do you wish to use an open format, or are proprietary formats also to be considered?
  • Do you have the resources to produce more than one format for each of your videos? The ability to provide a fallback option vastly simplifies the decision-making process.
  • Are there any browsers you're willing to sacrifice compatibility with?
  • How old is the oldest version of web browser you need to support? For example, do you need to work on every browser shipped in the past five years, or just the past one year?

In the sections below, we offer recommended codec selections for specific use cases. For each use case, you'll find up to two recommendations. If the codec which is considered best for the use case is proprietary or may require royalty payments, then two options are provided: first, an open and royalty-free option, followed by the proprietary one.

If you are only able to offer a single version of each video, you can choose the format that's most appropriate for your needs. The first one is recommended as being a good combination of quality, performance, and compatibility. The second option will be the most broadly compatible choice, at the expense of some amount of quality, performance, and/or size.

Recommendations for everyday videos

First, let's look at the best options for videos presented on a typical web site such as a blog, informational site, small business web site where videos are used to demonstrate products (but not where the videos themselves are a product), and so forth.

A WebM container using the VP8 codec for video and the Opus codec for audio. These are all open, royalty-free formats which are generally well-supported, although only in quite recent browsers, which is why a fallback is a good idea.

An MP4 container and the AVC ( H.264 ) video codec, ideally with AAC as your audio codec. This is because the MP4 container with AVC and AAC codecs within is a broadly-supported combination—by every major browser, in fact—and the quality is typically good for most use cases. Make sure you verify your compliance with the license requirements, however.
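Putting those two recommendations together, a markup sketch (file names are placeholders) lists the WebM version first so browsers that can play it will prefer it, with the MP4 version as the fallback:

<video controls width="640" height="360">
  <source src="my-video.webm" type='video/webm; codecs="vp8, opus"'>
  <source src="my-video.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
  Sorry, your browser doesn't support embedded videos.
</video>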

Keep in mind that the <video> element requires a closing </video> tag, whether or not you have any <source> elements inside it.

Recommendations for high-quality video presentation

If your mission is to present video at the highest possible quality, you will probably benefit from offering as many formats as possible, as the codecs capable of the best quality tend also to be the newest, and thus the most likely to have gaps in browser compatibility.

A WebM container using AV1 for video and Opus for audio. If you're able to use the High or Professional profile when encoding AV1, at a high level like 6.3, you can get very high bit rates at 4K or 8K resolution, while maintaining excellent video quality. Encoding your audio using Opus's Fullband profile at a 48 kHz sample rate maximizes the audio bandwidth captured, capturing nearly the entire frequency range that's within human hearing.

An MP4 container using the HEVC codec using one of the advanced Main profiles, such as Main 4:2:2 with 10 or 12 bits of color depth, or even the Main 4:4:4 profile at up to 16 bits per component. At a high bit rate, this provides excellent image quality with remarkable color reproduction. In addition, you can optionally include HDR metadata to provide high dynamic range video. For audio, use the AAC codec at a high sample rate (at least 48 kHz but ideally 96 kHz) and encoded with complex encoding rather than fast encoding.

Recommendations for archival, editing, or remixing

There are not currently any lossless—or even near-lossless—video codecs generally available in web browsers. The reason for this is simple: video is huge. Lossless compression is by definition less effective than lossy compression. For example, uncompressed 1080p video (1920 by 1080 pixels) with 4:2:0 chroma subsampling needs at least 1.5 Gbps. Using lossless compression such as FFV1 (which is not supported by web browsers) could perhaps reduce that to somewhere around 600 Mbps, depending on the content. That's still a huge number of bits to pump through a connection every second, and is not currently practical for any real-world use.

This is the case even though some of the lossy codecs have a lossless mode available; the lossless modes are not implemented in any current web browsers. The best you can do is to select a high-quality codec that uses lossy compression and configure it to perform as little compression as possible. One way to do this is to configure the codec to use "fast" compression, which inherently means less compression is achieved.

Preparing video externally

To prepare video for archival purposes from outside your web site or app, use a utility that performs compression on the original uncompressed video data. For example, the free x264 utility can be used to encode video in AVC format using a very high bit rate:
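A sketch of such an invocation (file names are placeholders, and this assumes an x264 build with MP4 output support; --crf 18 is visually near-lossless, and the ultrafast preset trades compression efficiency for encoding speed):

x264 --preset ultrafast --crf 18 -o archived-video.mp4 source-video.y4m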

While other codecs may have better best-case quality levels when compressing the video by a significant margin, their encoders tend to be slow enough that the nearly-lossless encoding you get with this compression is vastly faster at about the same overall quality level.

Recording video

Given the constraints on how close to lossless you can get, you might consider using AVC or AV1 . For example, if you're using the MediaStream Recording API to record video, you might use code like the following when creating your MediaRecorder object:
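A sketch of what that configuration might look like (the codec string is illustrative; its fields are unpacked below):

// stream is assumed to be a MediaStream obtained from getUserMedia(), a canvas, etc.
const options = {
  // AV1 Professional profile, level 6.3 High tier, 12-bit, 4:4:4, BT.2100 (PQ), full range,
  // with FLAC for lossless audio.
  mimeType: 'video/webm; codecs="av01.2.19H.12.0.000.09.16.09.1, flac"',
  bitsPerSecond: 800_000_000  // 800 Mbps cap shared between the video and audio tracks
};

const recorder = new MediaRecorder(stream, options);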

This example creates a MediaRecorder configured to record AV1 video using BT.2100 HDR in 12-bit color with 4:4:4 chroma subsampling and FLAC for lossless audio. The resulting file will use a bit rate of no more than 800 Mbps shared between the video and audio tracks. You will likely need to adjust these values depending on hardware performance, your requirements, and the specific codecs you choose to use. This bit rate is obviously not realistic for network transmission and would likely only be used locally.

Breaking down the value of the codecs parameter into its dot-delineated properties, we see the following:
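For the illustrative string above, the dot-delineated fields of the AV1 "codecs" value are, in order:

  • av01: the four-character code identifying AV1
  • 2: profile (Professional)
  • 19H: level (6.3) and tier (High)
  • 12: bit depth (12 bits per component)
  • 0: monochrome flag (0 means color)
  • 000: chroma subsampling (4:4:4)
  • 09: color primaries (BT.2020, as used by BT.2100)
  • 16: transfer characteristics (SMPTE ST 2084, the PQ curve)
  • 09: matrix coefficients (BT.2020 non-constant luminance)
  • 1: video full range flag
  • flac: the accompanying audio codec (FLAC)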

The documentation for your codec choices will probably offer information you'll use when constructing your codecs parameter.

  • Web audio codec guide
  • Media container formats (file types)
  • Handling media support issues in web content
  • Codecs used by WebRTC
  • RFC 6381 : The "Codecs" and "Profiles" parameters for "Bucket" media types
  • RFC 5334 : Ogg Media Types
  • RFC 3839 : MIME Type Registrations for 3GPP Multimedia Files
  • RFC 4381 : MIME Type Registrations for 3GPP2 Multimedia Files
  • RFC 4337 : MIME Type Registrations for MPEG-4
  • Video codecs in Opera
  • Video and audio codecs in Internet Explorer
  • Video and audio codecs in Chrome
  • Autoplay guide for media and Web Audio APIs
  • Guide to streaming audio and video
  • Media type and format guide: image, audio, and video content
  • Digital audio concepts
  • Digital video concepts
  • Image file type and format guide
  • The "codecs" parameter in common media types
  • Using audio and video in HTML
  • Using images in HTML
  • Mapping the width and height attributes of media container elements to their aspect-ratio



Update your iPhone or iPad

Learn how to update your iPhone or iPad to the latest version of iOS or iPadOS.

You can update your iPhone or iPad to the latest version of iOS or iPadOS wirelessly.

If the update doesn't appear on your device, use a computer to update it manually. Learn how to update your device manually if you use a Mac running macOS Catalina or later, a Mac running macOS Mojave or earlier, or a Windows PC.

Update your iPhone or iPad wirelessly

Back up your device using iCloud or a computer.

Connect your device to power and to the internet over Wi-Fi.

Go to Settings > General, then tap Software Update.

If more than one software update option appears, choose the one you want to install.

Tap Install Now. If you see Download and Install instead, tap it to download the update, enter your passcode, then tap Install Now. Learn what to do if you've forgotten your passcode.


If an alert appears when updating wirelessly

Learn what to do if an alert message appears when you try to update your device wirelessly.

Some software updates aren't available wirelessly. VPN or proxy connections may prevent your device from contacting the update servers.

If you need more space when updating wirelessly

If a message asks you to temporarily remove apps because the software needs more space for the update, tap Continue to allow the apps to be removed. After the installation completes, those apps are reinstalled automatically. If you tap Cancel instead, you can delete content manually from your device to free up space.

Customize automatic updates

Your device can update automatically overnight while it's charging.

Turn on automatic updates

Go to Settings > General > Software Update.

Tap Automatic Updates, then turn on Download iOS Updates.

Turn on Install iOS Updates. Your device will update automatically to the latest version of iOS or iPadOS. Some updates may need to be installed manually.


Install Rapid Security Responses

Rapid Security Responses deliver important security improvements more quickly, before they become part of future software updates.

To get Rapid Security Responses automatically:

Tap Automatic Updates.

Make sure Security Responses & System Files is turned on.

If you don't want Rapid Security Responses to be installed automatically, you can install them as software updates.

If you need to remove a Rapid Security Response:

Go to Settings > General > About.

Tap iOS Version.

Tap Remove Security Response.

You can reinstall the Rapid Security Response later, or wait for it to be installed permanently as part of a standard software update.

When you update your device to the latest version of iOS or iPadOS, you get the latest features, security updates, and bug fixes. Not all features are available on all devices or in all countries and regions. Battery and system performance may be influenced by many factors, including network conditions and individual use, so actual results may vary.



VIDEO

  1. safari

  2. 1980s SAFARI BUILT FOR BORNEO SAFARI & MORE!

  3. Расширения для Safari на iPhone: для чего нужны, как установить? 5 лучших бесплатных расширений

  4. Обзор VK 2.2 для iOS 8 + Как вернуть доступ к вашим аудиозаписям?!

  5. Rules of conduct at Dubai Safari Park

  6. Parachute for Skywalker X8 (Парашютный модуль для беспилотников на примере Skywalker X8)
