PTP Done Right for ST 2110: How to Avoid the Timing Pitfalls in IP Broadcast

December 03, 2025 00:37:32
Broadcast2Post by Key Code Media

Show Notes

The latest episode of the Broadcast2Post Podcast tackles one of the most misunderstood and mission-critical components of any ST 2110 or IP broadcast deployment: PTP timing.

As studios, stadiums, universities, and live production environments move to IP, timing is no longer a mysterious box hidden in the rack room. It is now a live network service that impacts every camera, multiviewer, replay system, intercom, and audio device on your network. When PTP is not configured correctly, it shows up as “random glitches” that are anything but random.

In the blog and video, the Key Code Media engineering team walks through how PTP actually operates inside a modern 2110 facility, why timing failures appear the way they do, and the essential design decisions that keep your plant rock-solid. The team then goes hands-on inside a real PTP monitoring environment to show exactly what healthy timing looks like, how to spot drift, and which alarms matter before you go to air.

Read the full blog: https://www.keycodemedia.com/ptp-done-right-for-st-2110-wp/

Episode Transcript

[00:00:00] Speaker A: 3, 2, 1. Welcome to Broadcast2Post, the show where we break down the technology shaping how modern media gets produced, delivered and experienced. I'm Steve Dupay, and today we're unpacking one of the most essential, yet often overlooked building blocks of every ST2110 and IP-based facility: PTP timing. At its core, PTP, or Precision Time Protocol, is the clock that keeps every device in your live production environment in perfect sync. Cameras, switchers, replay, audio and multiviewers all rely on that shared precise timing to move uncompressed video and audio across an IP network without drift, delay or packet chaos. In the SDI world, black burst or tri-level sync handled all that for you. In IP, timing becomes a live network service, and every part of your workflow depends on it. And that is where the real challenge begins. As more newsrooms, stadiums and live production facilities adopt ST2110, PTP is no longer a set-it-and-forget-it sync generator in the rack room. It is something you have to design, monitor and maintain. When timing is off, you feel it everywhere: glitches in the cameras, jitter in multiviewers, replay that behaves inconsistently and, worst of all, audio that drifts or pops. They look random, but they almost always trace back to timing. At Key Code Media, we are inside ST2110 facilities every day, helping teams design, build and support timing architectures that stay stable long after day one. If your organization is planning or troubleshooting an IP deployment, reach out to us. We would be happy to help you get your timing environment right before it becomes a problem. And in this episode, we want to give you the clarity and confidence to make PTP one of the least dramatic parts of your workflow. Which is exactly the goal. Here's what is ahead in today's session. Key considerations for PTP: we will walk through the timing, network and vendor decisions every ST2110 team needs to get it right.
PTP health check demo: Nick Kumar and I will take you inside a major all-2110 broadcast facility to show what proper PTP looks like in practice and how to spot trouble. We'll walk through offsets, jitter, domains, grandmaster behavior, and the metrics that actually matter. Lastly, we'll cover what can go wrong and how to fix it. We'll wrap with a real-world breakdown of the most common PTP failures, what causes them, and the configurations that lead to stable, reliable timing. By the end of this session, PTP will not feel like a mystery box. It will feel like something you can see, understand and keep under control. So let's get to it. [00:03:00] Speaker B: Have you ever watched a production and thought, wow, that lighting is perfect? But then the host stares into the void like they forgot how to read? Yeah, that's because they didn't use Ikan. See, Ikan makes top-tier studio lighting, prompters and gear that keep productions looking and running smooth. So whether you need buttery-soft LED lights, rock-solid teleprompters so you don't forget your lines, or just pro-level gear that won't let you down, Ikan has you covered. So if you want your production to look like a million bucks without spending a million bucks, check out Ikan. Because great lighting and a good prompter can make anyone look like a pro. [00:03:38] Speaker A: Let's shift into something every engineer should have in their back pocket: a clear checklist for getting PTP right in an ST2110 facility. Whether you're planning a new build, upgrading an existing plant, or troubleshooting sync issues, these are the fundamentals that keep your timing architecture stable and predictable. 1. Start with a clear PTP architecture plan. Before you buy hardware or start configuring switches, establish your PTP strategy. Most timing issues come from skipping this step. First, define your PTP domain. A domain is simply a way to separate timing groups so their messages don't overlap.
Devices in domain 0 ignore devices in domain 1, domain 2, and so on. What matters is consistency. Your grandmaster, boundary clocks and endpoints must all be set to the same domain. If they aren't, sync will never stabilize. Next, determine what PTP profile you're using. A profile is a standardized rule set that defines message rates and behavior. In broadcast you'll typically see three: the AES67 default profile, which is useful for native AES67 audio but not appropriate for ST2110 plants; SMPTE ST 2059-2:2023, the broadcast media profile, including TLV support; and the AES-R16-2016 interop profile, the one most engineers prefer because it ensures compatibility between the AES67 and ST 2059-2 ecosystems. Then you ask, how many grandmasters will I need? Most mid-to-large facilities rely on a redundant pair. Bigger or multisite deployments may require more, but two is the baseline. Finally, decide what clock types you're deploying. You'll typically encounter two in a broadcast facility: boundary clocks and transparent clocks. Boundary clocks are the recommended approach. These are switches that lock to an upstream grandmaster and act as masters for downstream devices, adding scalability and jitter resilience. Transparent clocks don't act as masters; they simply measure residence time and apply correction values. They're useful in feeder switches, but not ideal as your main strategy. Best practice: use boundary clocks whenever possible and make sure your switches support PTP version 2, i.e. IEEE 1588-2008. Your entire timing foundation depends on these early decisions. Make them intentional. 2. Choose the right grandmasters. Your grandmaster is the heartbeat of the facility. Invest accordingly. Look for devices with GPS or GNSS locking, redundant power, hitless switching, and support for the ST 2110-10 profile. And always deploy two grandmasters configured correctly for failover. This stabilizes BMCA behavior and protects against leadership flapping.
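As an aside, the domain-and-profile consistency rule above lends itself to a simple automated check. Here's a minimal Python sketch; the device names and the inventory format are hypothetical, standing in for whatever your switch and endpoint telemetry actually exposes:

```python
# Hypothetical inventory of PTP settings pulled from each device.
# In a real plant these values would come from switch/endpoint telemetry,
# not a hand-written dict.
DEVICES = {
    "grandmaster-a": {"domain": 127, "profile": "AES-R16-2016"},
    "grandmaster-b": {"domain": 127, "profile": "AES-R16-2016"},
    "leaf-switch-1": {"domain": 127, "profile": "AES-R16-2016"},
    "camera-ccu-3":  {"domain": 0,   "profile": "AES67-default"},  # misconfigured
}

def find_mismatches(devices, ref_name="grandmaster-a"):
    """Return devices whose PTP domain or profile differs from the reference."""
    ref = devices[ref_name]
    return {
        name: cfg for name, cfg in devices.items()
        if cfg["domain"] != ref["domain"] or cfg["profile"] != ref["profile"]
    }

for name, cfg in find_mismatches(DEVICES).items():
    print(f"{name}: domain={cfg['domain']} profile={cfg['profile']} "
          f"does not match the grandmaster")
```

Running a check like this after every maintenance window is one cheap way to catch the "sync never stabilizes" class of problem before it reaches air.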
A great example is the Telestream SPG9000, which is widely deployed as both a PTP grandmaster and a full sync reference generator. 3. Understand the Best Master Clock Algorithm, or BMCA. BMCA determines which device becomes the active grandmaster. If priorities aren't set correctly, you can get unexpected leadership changes. When that happens, you'll see it immediately in cameras, replay, multiviewers and, most importantly, audio. The best practice is to set P1 and P2 priorities intentionally, prevent endpoints from becoming grandmasters, test your backup GM to confirm it truly takes over, and do all of this before you go live. Don't ever test it in a working plant. A few minutes of prep prevents hours of mystery troubleshooting later. [00:07:29] Speaker C: 4. [00:07:30] Speaker A: Protect PTP packets with proper network configuration. PTP packets are tiny but extremely sensitive. They must have the highest QoS priority. At a minimum, separate PTP into its own VLAN. Isolate or air-gap timing traffic from heavy flows. Avoid congestion points near your grandmasters or boundary clocks. If PTP is sharing space with data bursts or unshaped multicast, timing will wander. 5. Monitor PTP continuously, not just at installation. This is the biggest blind spot we see. PTP is not a set-it-and-forget-it proposition. Use tools like Telestream Prism or your switch telemetry to monitor master-to-slave offset, jitter, announce intervals, leadership changes and sync delays. Healthy PTP is predictable. If you see offsets drifting or values spiking, the fabric is telling you something is wrong. [00:08:28] Speaker C: 6. [00:08:28] Speaker A: Validate everything in a staging environment. Before you bring your system online, test your timing end to end. Simulate grandmaster failover, network congestion, link loss, power cycling, switch reloads. If timing breaks in staging, it will break on game day. 7. Train your team for day-two operations. PTP isn't a one-time install.
Your team needs to understand how to identify timing faults, read switch logs and metrics, respond to grandmaster failover, use Prism or similar tools, and verify timing after maintenance windows. And this is where Key Code Media comes in. We don't just design and build ST2110 systems, we make sure your engineering team knows how to operate them confidently long after day one. At Key Code Media, we design and integrate complete broadcast and production environments nationwide, from small studios to major sports houses. We help teams transition from SDI to IP, deploy ST2110 correctly, and support long-term operations with training and service. To simplify planning, we've created a set of studio starter bundles matched to the most common needs. The Central Studio Bundle, ranging from $75,000 to $150,000, is great for YouTubers, corporate studios and training centers, built on NDI and TriCaster workflows with PTZ cameras, compact audio and LED lighting. The Newsroom Studio Bundle, ranging from $350,000 to $750,000, is built around Ross Carbonite switching, Ross Ultrix routing, large CueScript prompters, studio pedestals, hybrid SDI/2110 infrastructure and Dante-enabled audio. It's ideal for campus stations and midsize newsrooms. The Enterprise Newsroom and Sports Bundle, ranging from $1.2 million to $3.5 million, is for high-demand operations needing ST2110 redundancy, replay, major production cores from Ross or Evertz, graphics, cameras and full PTP network redundancy. Perfect for major broadcasters and sports. If you're planning a new studio or upgrading an existing one, here's your next step. Complete the PTP key considerations we covered today, then reach out for a free consultation. We'll assess your space, workflow and timing requirements and map out the right design and budget for your goals. Contact Key Code Media today and let's build a studio that performs on day one and stays rock solid on day 100.
[00:11:13] Speaker B: This episode is brought to you by Ross Video. From video switchers to graphics and routing, whether SDI, IP or even in the cloud, Ross makes live production easy. Trusted everywhere from the biggest sports stadiums to city council meetings, newsrooms and more. Are you ready to upgrade your workflow? Key Code Media offers the best Ross Video pricing and a free consultation so you get the right products the first time. Trust the experts at Key Code Media and book [email protected]. [00:11:46] Speaker A: Now that we've reviewed the checklist and bundles, I want to show you what PTP looks like in the real world. Nick Kumar, our VP of Engineering, is here with me to discuss the practical side of PTP. Nick, thanks for joining us for this part of the session. [00:11:58] Speaker C: Hi, Steve. Glad to be here this morning. [00:12:01] Speaker A: All right, we're going to remote into a major sports broadcast hub. [00:12:05] Speaker C: All right, so we will be connecting through TeamViewer and we will be using the Telestream Prism for the demo. If you have not worked with Prism before, it is a very powerful IP and SDI monitoring platform that lets you see timing, signal health, packet behavior and ST2110 flows in real time. It is one of the best tools available for diagnosing PTP issues and confirming that your timing environment is behaving the way you expect. Big shout-out to the Telestream team for making this possible. With that, let's take a look at this facility's timing system and walk through what proper PTP looks like and how you can spot trouble before it takes you off the air. [00:12:40] Speaker A: So what are we looking at here, Nick? [00:12:42] Speaker C: Yeah, Steve. So what we have over here is the overview layout of a Telestream Prism. Right now we have a test signal that's routed to the Telestream Prism and we have various windows selected.
For example, over here on the top left, we have the actual video being routed into this particular Telestream unit that I'm remoting into. On the right over here you have an audio metering display. At the bottom left, this is essentially just telling us what the overall IP status is, what all the different multicast streams are being joined with that signal route, all the different types of 2110-20, -30 and -40 streams. And then on the right over here, I have a timing window that essentially gives us some details on the stream timing itself. The good thing about the Prism is that it's very highly customizable, and you can change the layout of the different tiles. [00:13:36] Speaker A: How would you go about reviewing and assessing the PTP system? [00:13:40] Speaker C: Absolutely. So the first thing that you want to look at is, if you click on the home button over here, you're going to see general metrics. For example, you want to see over here that your video reference is locked and you have a good lock signal. So that basically tells us that at a very high level we have a good signal lock. The next thing you would want to look at is the PTP graphs, right? So there is a tile over here that we just call PTP graphs. You click on graphs, it presents a histogram plot like this and gives you the overall master-to-slave delay for the system. And the key thing to note over here is what you don't want to see: any drastic variations or any kind of sudden spikes or drops on this chart over a period of time. You'll see over here that the mean value is about 674 ns. So this is the master-to-slave delay. And what that basically means is the delay offset between the endpoint and the grandmaster itself, and what the variation is. So you'll see that the values are in the order of magnitude of nanoseconds. So that in and of itself is actually a very, very stable and precisely locked system.
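The "no sudden spikes" rule Nick applies to the master-to-slave graph can be approximated in code. A minimal sketch, assuming you have exported offset samples in nanoseconds from a tool like Prism; the 1 µs tolerance is an illustrative assumption, not a Prism setting:

```python
from statistics import median

def flag_offset_spikes(samples_ns, tolerance_ns=1000):
    """Flag master-to-slave offset samples that jump away from the median.

    samples_ns: offset samples in nanoseconds, as read off a PTP graph.
    Returns (median_offset, list of outlier samples).
    """
    med = median(samples_ns)
    return med, [s for s in samples_ns if abs(s - med) > tolerance_ns]

# A healthy clock: samples clustered tightly around ~674 ns, like the demo system.
med, spikes = flag_offset_spikes([670, 676, 674, 672, 678, 673, 675])
print(med, spikes)          # tight cluster, no spikes flagged

# An unhealthy clock: one sample jumps to 50 microseconds.
med, spikes = flag_offset_spikes([670, 676, 674, 50000, 672])
print(med, spikes)          # the 50,000 ns sample is flagged
```

A median-based check is deliberately simple here; in practice you would trend these values over hours and alarm on sustained drift as well as single spikes.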
Other things you want to take a look at are the grandmaster ID, which is listed over here, along with the domain. This should obviously match the grandmaster ID of your system and not that of the boundary clock. We'll go over that in more detail as we go: how to actually design and put together a system with the different PTP priorities as well as grandmaster setups. So these are just some very basic metrics to look at. Again, the key thing is you don't want to see any kind of major spikes or major dips in the scatter plot. The next thing that we would want to look at is the timing, right? So let's take a look at the timing window. This should be a very familiar window for a lot of broadcast engineers. Why is that? Because in SDI workflows, you are looking at the circle and this crosshair over here. And ideally you want the circle to be right at the center vertically; your delay wants to be at zero lines. Now you'll notice over here that the delay is at 21 lines. There's a reason for that. In 2110, when you're monitoring 2110-20 flows, you're actually monitoring active video. So if you remember from old-school broadcast engineering, you will know that for 1080i video, which is what we're looking at in this case, the start of active video is line 21. And that's why this vertical offset is at 21 lines delayed. If this were a 720p signal, this would be at 26 lines. If this were a 1080p signal, it'd be at 42 lines delayed. And I must also show you a quick comparison of what this window would look like for an SDI feed. So on this particular Prism, I have an SDI input available as well. So this is actually monitoring the 2110 signal. If I come in over here and I click on this SDI 1 and I do a recall, it'll change the view slightly, but you'll see that the vertical offset is zero lines, right?
And what that means is that the RP 168 reference point is properly aligned with the start of the actual frame, not the start of active video. That's the key difference when you're comparing an SDI signal with a 2110 signal in this window: on 2110 you will inevitably see a vertical line delay, whether it's 21 lines, 26, 42, or even 84 for UHD signals, but for SDI it's going to be at 0 lines. So let me go ahead and recall the IP input and let's get back to where we were. The next thing I'd like to take a look at is stream timing. The stream timing is going to give you a scatter plot, if you would, or a histogram for all the different essence streams associated with the signal that's being routed into the Prism. At this moment, this is a sports network feed that's being routed into the Prism. And you'll see that I have options to monitor what the video stream timing is, what the audio stream timing is, and what the data stream timing is. The key over here is, again, you don't want to see spikes, you don't want to see any kind of major dips. And this video-to-PTP offset, this mean value, wants to be around the 622-to-625-microsecond range. There's interesting math behind this. That value is the time equivalent of the line delay, something called the TR offset. In this case that is being represented by 21 lines. So if you actually do the math for a 1080i video stream, line 21 is at about 622 microseconds. Line 26 in 720p would show exactly the same video-to-PTP and RTP offset, and the same goes for 1080p video. This is the key number that you want to look at: 622 microseconds. Now keep in mind that if this number is higher, if the TR offset or the actual video delay is at, let's say, 31 lines delayed, it need not necessarily be a problem.
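The line-21 arithmetic can be checked directly. A small sketch, assuming 1080i at 59.94 fields per second (29.97 frames/s) with 1125 total lines per frame:

```python
# Time equivalent of the 21-line delay for a 1080i59.94 stream.
FRAME_RATE = 30000 / 1001   # 29.97 frames per second
TOTAL_LINES = 1125          # total raster lines per 1080i frame

line_time_us = 1e6 / (FRAME_RATE * TOTAL_LINES)   # ~29.66 us per line
offset_us = 21 * line_time_us                     # ~622.8 us

print(f"line time: {line_time_us:.2f} us, 21-line offset: {offset_us:.1f} us")
```

So the ~622 µs video-to-PTP offset on the Prism graph is simply 21 line periods of a 1080i raster expressed in time.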
But the key is, so long as the signal is locked, just like in SDI, it may not necessarily be locked to your house black, but so long as it's locked, you don't want to see any kind of sawtooth patterns. Right? So that's one of the most crucial things about PTP. You want to see consistency, you want to see stability in your video-to-PTP offset and your video-to-RTP offset. And you can start looking at your audio streams as well and make sure that the audio-to-video offsets are also in the microsecond range. And then data, of course, you don't want to forget about data either. You want to make sure this is consistent as well. So those are some of the key things that you want to be monitoring when you're looking at stream timing and PTP timing for actual essence streams. [00:19:45] Speaker A: Great. Something that caught my attention there was that you want to make sure that your IP and other configuration values are correct. Shouldn't you have a quick reference sheet to refer to of how the system was supposed to be set up versus where it is now? [00:20:01] Speaker C: Absolutely. One of the most critical things about designing 2110 systems and PTP design is properly planning for it. A lot of these things don't just fall together. They have to be designed and you have to properly plan for this. So one of the approaches that we take is we document and design the grandmaster configuration, the grandmaster IDs and the types of priorities we're going to use beforehand, and try to document as much as possible. Some of the tools that I use are Excel spreadsheets that we use when designing PTP on a 2110 system. Let me show you some samples. This is a sample ST2110 network design spreadsheet that we use at Key Code Media.
Depending on the number of grandmasters that you have in your plant, most likely you would have two grandmasters, a primary and a redundant. So it would be Grandmaster A and Grandmaster B. And over here you would essentially start documenting the different metrics. So the GM A ID is whatever the MAC address is of that particular grandmaster, and the GM ID of Grandmaster B would be documented over here. Now, obviously you're not going to know what the grandmaster ID is during design. Once the device is actually shipped, you can easily obtain the GM ID, and the GM ID is essentially just the MAC address. So you'll see that the MAC address over here ends in a B, this one ends in a zero. And then when you actually look at the grandmaster over here, you'll see that this is the Grandmaster A GM ID. These are some of the things that you would want to document. And then of course there are other metrics that you would need to document for a good PTP design. Determining the type of PTP profile you want to use: the PTP profile that we're going to be using here is the interop profile, as opposed to the AES67 default profile or the ST 2059-2 profile. The domain is also very important. In a networking environment with PTP distribution, you want to make sure that you are all within the same domain. There are cases where you can have multiple domains, if you have multiple sites or multiple production areas within a plant that might necessitate different domains, but by and large the domain is usually the same for a production system. PTP priorities are also very important: PTP priority 1 and PTP priority 2 for the grandmasters, and then subsequently determining what the PTP priorities are for all the downstream network switches that would be configured in boundary clock mode. This is also very important because this is what's going to dictate who the grandmaster is in the event of any failure.
That's part of the BMCA, the Best Master Clock Algorithm. Other things to consider are what the announce intervals, sync intervals and delay request intervals are going to be. So there's a lot of planning that goes into determining and designing a proper PTP system. Another thing to consider is designing your network topology in a way that is scalable. Here's a sample of a network topology that we've used on a past project. Over here you basically take the data that we just presented and put it in a network topology view, and you'll see that you have your PTP grandmasters. In this case it's a Telestream SPG9000, with the priorities on Grandmaster A set as P1 = 1 and P2 = 1, and on Grandmaster B, P1 set to 1 and P2 set to 2. One of the key considerations when you're designing a system, as I mentioned earlier, is setting up the priorities. It's very, very important. You want to make sure that the priorities are set in a way such that there can be a proper failover in the event of issues with Grandmaster A, whether that's a loss of GPS lock or anything else. And then of course, for all the downstream devices, the boundary clocks, the switches and everything, you also want to make sure that those PTP priorities are adequately provisioned. So this is the next step that we would want to do as part of designing a system: building the overall topology in a more readable and user-friendly manner. And the last thing we want to go over is a sample ST2110 stress test checklist. This is a very, very important tool that we use on projects. Not only does it stress test the devices for their compliance with the 2110 set of standards, but it also stress tests the overall system for scenarios like PTP failures. So for example, we have different tabs over here, and the first tab is a Device Under Test checklist.
So a Device Under Test could be anything. Essentially, it's the actual endpoint. It could be a multiviewer, a production switcher, an IP gateway, an audio console, or a graphics engine. All of these are 2110-compliant devices that you would test against all these criteria. So as you'll notice over here, we have different test types. For example, there's a 2110-10 PTP test type. We're going to do PTP tests; we're going to make sure that the devices are locked, that they're locked to the proper domain, and we also want to make sure that we're getting the proper PTP delay request messages. And then if you scroll down this checklist, there's more comprehensive testing that we do for the 2110-20 video, with all these different criteria attached. Then we do different tests for 2110-30, different tests for 2110-40, and then of course a final test for 2022-7, which is your seamless packet switching across your red and blue fabrics. So essentially it's redundancy in your ST2110 streams. The next thing we want to do is make sure that the network fabric is configured properly, and you would want to simulate some of these failure tests within your network fabric. So we'll start off with a physical layer test, followed by checking the interface layers, and then we're going to do an actual PTP failure test. These are some of the commands that we would use depending on the type of switches we're using. If we're using an Arista network or a Cisco network, the syntax for the command line interface may be slightly different, but the general idea is the same. We want to be able to mimic this stress testing before the system goes into production. Of course there's multicast testing that we would do, some routing tests, and then uplink tests.
And then another thing I have over here, just for everyone's edification, is a test for DDM. If you are putting in a Dante Domain Manager, these are just some simple test criteria to test against. [00:26:56] Speaker B: This episode is brought to you by Studio Network Solutions. Media teams have enough things to worry about. Storage shouldn't be one of them. That's where Studio Network Solutions comes in. SNS makes your shared storage, media management and cloud workflows easy so you can focus on what you do best: creating. See how SNS can help your [email protected]. [00:27:22] Speaker A: All right, Nick, now that we've looked at what healthy PTP looks like, let's zoom out and talk about the reality most teams face. PTP is one of those things that feels simple on paper, but in the real world there are lots of ways it can go sideways, and when it goes wrong it impacts everything. Cameras, replay, multiviewers, audio, tallies, you name it. [00:27:45] Speaker C: Yes, absolutely. In IP facilities, PTP and timing are the foundation of a stable and reliable media fabric. The network architecture should be designed with proper planning in mind for current scale and future growth. Improperly designed systems, misconfigured endpoints, or even a failure of an upstream grandmaster can lead to PTP-related issues, which can then ultimately manifest themselves as video glitches, audio drops, lip-sync drift, and so on. So therefore it is very important to not only invest in proper design planning, but also in PTP and network monitoring tools to keep a health check on your facility. As I mentioned earlier, one of my mentors used to tell me that good luck is the result of a good design, and it is a philosophy I try to subscribe to every day. [00:28:29] Speaker A: What we see in most ST2110 deployments is that PTP problems fall into a few categories: network and PTP architecture mistakes, configuration errors and assumptions, and endpoint behavior.
The tough part about all of this is that these issues usually stack on top of each other and you're not always aware of how or where exactly the problem is located. [00:28:52] Speaker C: Believe it or not, the network architecture need not necessarily be the biggest culprit when it comes to PTP problems. It certainly can turn out to be a huge problem if the underlying network is not designed using PTP and ST2110 best practices. A lot of these best practices include building your network topology with scalability and flexibility in mind, configuring downstream network switches in boundary clock mode, and ensuring that PTP priorities for the grandmasters and boundary clocks are set correctly and tested for proper failovers. I recently saw an issue at a TV station where GM A's priority 1 was set to zero, GM B's priority 1 was set to one, and there were some GPS lock issues on GM A. You would typically expect Grandmaster B to take over as grandmaster in this type of situation due to better clock accuracy, but that was not possible, since priority 1 on Grandmaster A had a higher priority of zero, causing it to always stay active as the primary GM. So these are the kinds of things that we need to be mindful of when we are designing PTP systems. A lot of other times it boils down to misbehaving endpoints due to configuration errors. For example, the domain may not be set correctly on the endpoint, causing video flickers or audio drift. Or perhaps there is an AES67-compliant device that expects a different PTP profile, with different messaging rates, than the boundary clock is sending. [00:30:26] Speaker A: So that means there are some common symptoms we can take a look at to see what's going on. I think that's where people get confused. When timing drifts, it rarely whispers, "You've got a PTP problem." It usually screams, and you're in a panic to get it resolved. [00:30:42] Speaker C: Yeah, absolutely.
You'll see jitter or video drops in multiviewers and program outputs, cameras dropping frames, replay acting sluggish or inconsistent, audio dropping out or drifting a bit, or encoders freezing for a frame or two. Unfortunately, if you are troubleshooting a problem like this, you have to move your way upstream in the chain: first narrow down the issue at the endpoint level, ensuring the endpoint is not misconfigured and nothing looks off; then move up to the boundary clock or network switch it is connected to and ensure proper PTP messaging is taking place and there are no issues at the boundary clock level with BMCA-related grandmaster changes; and then keep working your way up to identify root cause. A small PTP-related misconfiguration can be very problematic indeed, especially if you're in an on-air production environment. [00:31:32] Speaker A: And you know, another area where facilities get into trouble is mixing standards. We like to think everything in an ST2110 plant just plays nicely together. The timing profiles are not identical across audio, video and legacy gear, correct? [00:31:45] Speaker C: Oh yeah, absolutely, for sure. So this goes to the part of the design where, in order to build a reliable system, one must not only architect the network properly but also understand the endpoint requirements. There are AES67-native devices that require the AES67 default IEEE 1588-2008 profile, whereas most ST2110-native endpoints commonly use the SMPTE ST 2059-2 profile. If you have a mixture of both types of endpoints, I recommend using the AES-R16-2016 PTP profile, which provides interoperability between the AES67 and 2059-2 profiles with a common range of messaging intervals. AES67-native endpoints can also become the PTP master, whereas 2110 does not allow endpoints to take over as PTP masters.
Therefore, it is important to make sure the PTP master role is set on your endpoint-facing boundary clock network interfaces to disallow endpoints from becoming PTP masters and causing timing issues across your system. Of course, there's always the Dante factor. Most newer Dante devices with firmware versions 4.2 or later can support external PTPv2 clocks. However, in a lot of applications you may end up with one Dante device in the domain that acts as the Dante leader for that domain, with a PTPv2 upstream connection. This endpoint can then act as a PTPv1 boundary clock to the other Dante endpoints in that domain. You would need something like Dante Domain Manager to orchestrate this type of boundary clock into the Dante world, so these are some of the things to be mindful of. [00:33:18] Speaker A: And of course, a well-architected network and PTP design are essential for proper ST2110 plant operation. But what are some of the best practices you should pay attention to in order to get it right and avoid on-air issues? [00:33:32] Speaker C: Some of the most basic things are to make sure that you have the latest firmware or software versions on your PTP grandmasters and network fabrics, and of course all your corresponding endpoints. From a PTP standpoint, pay special attention to the priorities, priority1 and priority2, in your grandmasters and downstream boundary clocks. Make sure that the endpoint-facing boundary clock interfaces have the PTP master role enabled so that a rogue endpoint cannot become a PTP master. Also make sure to conduct proper PTP failure tests before launch and simulate various failure scenarios. This is the best time to test application against theory, of course. Needless to say, please do not do a PTP failure test on a live broadcast system. And from a network configuration standpoint, at a very high level, try to use Layer 3 routed systems with each interface configured as a Layer 3 port on a /30 subnet.
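The /30 addressing scheme just mentioned is easy to see with Python's standard `ipaddress` module: each /30 yields exactly two usable host addresses, one for the switch interface and one for the endpoint, so every media device sits on its own tiny point-to-point segment. The 192.0.2.0/24 block used here is a documentation range, purely for illustration:

```python
# Sketch: carving one /24 into point-to-point /30 links for routed endpoints.
# 192.0.2.0/24 is the RFC 5737 documentation range, used as an example only.
import ipaddress

uplink_block = ipaddress.ip_network("192.0.2.0/24")

# A /24 splits into 2**(30-24) = 64 separate /30 links.
links = list(uplink_block.subnets(new_prefix=30))
print(len(links))  # 64

# Each /30 has exactly two usable hosts: switch side + endpoint side.
first = links[0]
hosts = [str(h) for h in first.hosts()]
print(first, "->", hosts)  # 192.0.2.0/30 -> ['192.0.2.1', '192.0.2.2']
```

Because no two endpoints ever share a Layer 2 segment, a misbehaving device's broadcast or multicast traffic stops at its own switch port, which is the "blast radius" benefit described next.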
This is a great method to limit your broadcast blast radius, protect you from things like multicast MAC address aliasing, and ensure fast drops of multicast from endpoints that are misconfigured with the wrong IP. [00:34:40] Speaker A: And then there's the classic: everything was working until someone updated the switch or a device's firmware, and then everything seems to go to pieces. [00:34:49] Speaker C: Oh yeah, absolutely. And that's where proper maintenance best practices come into play. Obviously this includes testing failure scenarios adequately and referring back to your documented failure-testing results from before launch to confirm expected behavior any time there is a change. Even if it is a minor change, remember causality: a change somewhere in the chain can cause a downstream endpoint or element in the path to misbehave. That is why, when maintenance needs to be done on a system, it must also be properly planned, especially if you are in a live production environment. [00:35:22] Speaker A: Thankfully, the tools have gotten much better for monitoring and analysis. Telestream's PRISM is one of the best ways to visualize offsets, jitter, domains and BMCA behavior. You immediately see where timing is coming from and whether or not it's stable. [00:35:36] Speaker C: Oh, absolutely. On the infrastructure side, for example, grandmasters from companies like Telestream, Meinberg or Brainstorm are extremely reliable, and for network routing and switching, Arista and Cisco are the two most consistent platforms for predictable boundary clock behavior and strong PTP visibility. Streaming telemetry tools for advanced network monitoring also become essential troubleshooting tools, for example Arista CloudVision Portal, Cisco NDFC, or even something like Providius. [00:36:06] Speaker A: Yeah, so if we boil all of this down, PTP usually fails when architecture, configuration and network don't properly align.
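At their core, the monitoring tools mentioned above watch each endpoint's offset-from-master over time and alarm when offset or jitter crosses a threshold. A toy sketch of that idea, with made-up threshold values and synthetic sample data (not vendor recommendations):

```python
# Toy sketch of PTP health monitoring: flag an endpoint whose recent
# offset-from-master samples show excessive offset or peak-to-peak jitter.
# Threshold values below are illustrative, not vendor guidance.

ALARM_OFFSET_NS = 1000  # illustrative: worst-case offset limit, nanoseconds
ALARM_JITTER_NS = 500   # illustrative: peak-to-peak jitter limit, nanoseconds

def check_endpoint(name, offsets_ns):
    """offsets_ns: recent offset-from-master samples in nanoseconds."""
    worst = max(abs(o) for o in offsets_ns)
    jitter = max(offsets_ns) - min(offsets_ns)
    alarms = []
    if worst > ALARM_OFFSET_NS:
        alarms.append(f"{name}: offset {worst} ns exceeds limit")
    if jitter > ALARM_JITTER_NS:
        alarms.append(f"{name}: p-p jitter {jitter} ns exceeds limit")
    return alarms

# A healthy camera vs. a drifting encoder (synthetic data):
print(check_endpoint("cam-01", [40, -35, 60, -20, 55]))        # []
print(check_endpoint("enc-02", [200, 900, 1500, -300, 1200]))  # two alarms
```

Catching drift like this before it crosses a threshold is exactly the "which alarms matter before you go to air" point: the goal is to see the trend on the dashboard, not the glitch on the program output.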
It's rarely one specific issue. It's usually a combination. [00:36:20] Speaker C: That's right, and the key is proper planning, whether you are building a new system or about to perform maintenance on an existing system. Other important factors are consistency, consistent profiles, consistent firmware, and continuous monitoring. If you treat PTP as a living network service instead of a box in a rack, the entire system becomes dramatically more stable. [00:36:41] Speaker A: Thanks a lot for going through all this with me today, Nick. I learned an awful lot about things I thought I knew, and there are some key points here that I didn't. So thank you again. These are definitely insights teams need to know before they end up troubleshooting their timing in the middle of a show. [00:36:57] Speaker C: Absolutely. Thank you, Steve. Thank you for having me. [00:37:00] Speaker B: This episode is brought to you by Avid Technology, the industry standard for storytellers. Hollywood editors, chart-topping musicians and newsrooms rely on Avid. From Media Composer to Pro Tools, shared storage to newsroom tools, Avid keeps productions moving. Key Code Media delivers the most competitive Avid pricing plus a free consultation to ensure your workflow is built right the first time. Count on the trusted team at Key Code Media. Book today at keycodemedia.com.
