[00:00:17] Speaker B: Hello, hello and welcome. This is our Top Trends in 2024 Recap: Live and Post Production Innovations episode of Broadcast to Post. It's our final episode of the year.
But why am I here? We're shaking things up for this special end-of-year recap. Normally I'm behind the scenes, off camera, in the control room producing the show and taking your questions in the YouTube chat. But today I'm stepping into the spotlight. Will I regret it? Probably. But that's a story for another time. We'll see how it goes. This episode will be less of a panel discussion and more of a roundtable conversation. Our goal is to help you cut through the noise and uncover the tools, workflows and vendors that seamlessly integrate into your systems, helping you budget and plan for your teams in the new year.
So helping us all make sense of this are our top expert contributors from Key Code Media and the Broadcast to Post podcast. We're going to start with Steve Dupay, our senior director of broadcast innovations. Steve leads some of the most complex broadcast projects, including work for sports and newsrooms, PBS stations and government clients. Then we have Jeff Sangfill, our chief technologist. Jeff oversees all pre-build engineering across the company and has a bird's-eye view of every integration project happening. Then we have Michael Kammes, senior director of innovation. Michael recently rejoined Key Code Media and has been a driving force in the creative post production and finishing technology space. He'll be joining us a lot more in the second half when we get into post. And if you haven't already checked it out, Michael has a wonderful YouTube series called Five Things. Click and subscribe on that after we wrap up here. And lastly, we brought the chief himself, Mike Cavanaugh, the president of Key Code Media. With over 30 years of experience in business, technology and vendor relationships, he's got a lot of great insight into all aspects of the industry. Mike's main focus is ensuring that the technology you invest in delivers measurable monetization benefits. He's also our boss, and we're excited to have him here. And of course, behind the scenes, and this is the first time we've ever done this, we have a control room camera with Chase Baker, J.J. Klein and Andy Agua, who also help with these podcasts and will be contributing applause to everyone.
Applause, applause.
Okay, hopefully that sound effect went through all right. So we're just going to dive into it, starting with live production.
We're going to kick it off with.
[00:02:53] Speaker C: Trends in live and broadcast technology. When it came to key trends, there were a few big headlines that stole the show. Auto tracking cameras leveraging face and body detection were a major trend this year. AI has improved accuracy, making subject tracking more predictable. On the high end, Ross Video launched the Vision[Ai]ry robotic camera control system, which is going to be great for newsrooms, while Advanced Image Robotics kind of stole the show introducing the AIR One, a cloud-controlled system delivering cinematic image quality that can be controlled remotely from an iPad. For budget-conscious buyers, PTZ cameras from Sony, Panasonic and JVC now feature face and body detection at prices under $1,500. IP-based standards and Ethernet infrastructure continued to dominate upgrades in new and aging facilities. SMPTE 2110 led the way in high-end applications for newsrooms, sports and outside broadcasts; Blackmagic Design showcased 15 new SMPTE 2110-ready products at NAB 2024. NDI made waves with NDI 6, adding HDR 10-bit color and WAN improvements for NDI Bridge, perfect for multi-site workflows. NDI-compatible all-in-one switchers also emerged, including the long-awaited 12G-ready TriCaster Vision, QuickLink Studio Pro and Ross Video's first NDI-ready Carbonite Code. Remote production saw a big leap forward thanks to advancements in 5G and Starlink. Major events like the Paris Olympics outsourced control rooms to the US, while political events across the US used remote contribution for live appearances nationwide. Speaking of the Paris Olympics, this year's games set the benchmark for new workflows and technology, from HDR capture and Dolby Atmos immersive sound to AI-powered multi-cam replays and fully remote operations using 5G. Let's get into it.
[00:04:54] Speaker B: All right, so I'd like to start with the Paris Olympics, because I think it is just such a big event and incorporates some of the highest-end technology. It's definitely the flagship broadcast of the year. The Olympics tend to lead the way in leveraging new technology and new workflows.
Steve, what were the new noteworthy technologies used at the Olympics this year that you would expect to see more of in live sports and other live productions coming into the new year?
[00:05:32] Speaker D: Certainly happy to answer that one, Matt. While there were a lot of innovations, I think there's a key overarching thing that we've got to pay attention to first, and that is that the biggest takeaway from the Olympics was that the total, all-around progress across the entire production workflow using these new technologies brought the viewer closer to the action, was more immersive, and set standards that the sports broadcast industry is going to lean into for a long time to come. Some of the key ones, as you mentioned in the pre-roll, were things like 5G, which enabled us to put cameras in places we've never been before, and then using Starlink as a backhaul to get that in near real time back into the production facility so it could be properly switched, cut, et cetera, for the production value. The IP technologies built around 2110 enabled a lot of that content to be moved around easily and quickly, and made it so that the overall infrastructure could be quite flexible and repurposed. There's another part there that's a little more high end, and that is that technologies such as microservices running in Kubernetes containers on standard servers were reused quite a bit at the Olympics, making it possible to lower the cost of ownership to get the Olympics done, then sell those boxes off for other uses later on and repurpose them as needed by simply creating a new workflow for a new customer. So all these technologies converged to make probably the most immersive, most invigorating Olympics I think we've seen in a very long time.
[00:07:22] Speaker B: Awesome. And I know you and Jeff were both at the SVG Summit, right? I think you're both still in New York... or not. You're back, Steve. Jeff, you're still out there. Were you able to attend any of the Olympic sessions, Jeff, and was there anything you saw there?
[00:07:39] Speaker A: The one thing I saw about the Olympics that was interesting is that the bulk of the control room work was done out of Stamford, Connecticut, despite the fact the Olympics were in Paris. So it's remote production. It's not cloud-based production, but remote production. NBC has a giant facility they built out there for a large number of their other properties, like working with the NFL and the NBA and all those other things, and they just repurposed it for the Olympics and added capacity. It's one of those facilities that's always doing something extremely cool.
[00:08:16] Speaker D: You know, Jeff, that's a really good point. NBC, who was featured in a session at SVG, made good use of the technologies to do that in remote production. But that same thing was duplicated by numerous countries throughout the world, because it was easy to subscribe to the streams in a multicast environment. The infrastructure didn't have to get out of control to give everybody those kinds of feeds and let them have their own remote production capabilities wherever they were located in the world. So that's really advancing the capabilities of production across the globe. But it can also be applied here in the US as well, being able to send enough home feeds for an away team so that they can do the same level of production that the home team is able to do.
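To make that multicast point concrete, here is a minimal Python sketch of what "subscribing to a stream" means at the network level: a receiver joins a multicast group and the switch fabric replicates the packets, so each additional subscriber costs the source nothing. The group address and port are hypothetical, and real 2110 receivers negotiate this via SDP files rather than hand-coded sockets.

```python
import socket
import struct

MCAST_GRP = "239.1.1.1"  # hypothetical multicast group carrying a camera feed
MCAST_PORT = 5004        # hypothetical UDP/RTP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Join the group: this sends an IGMP report, and the network starts
# delivering the stream to this host. Other subscribers are unaffected.
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, addr = sock.recvfrom(65536)
    # hand the RTP payload to a decoder here
```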
[00:09:09] Speaker B: Steve, wasn't there something about AI multi-cam replay systems? How does that work? Do you know anything about that part of the Olympics? And do you think this is now going to be used on Sunday Night Football, or maybe it's already being used on Sunday Night Football?
What are your thoughts on the AI replay stuff?
[00:09:27] Speaker D: Yeah, so AI had a significant part to play in this, although I would say the Olympics could be known more as the 5G Olympics than the AI Olympics. Where AI really came into play was with the cameras: being able to track the action, and being able to put up information quickly on, you know, the speed of a ball being hit or the position of a swimmer, all that other content that helps you stay immersed in the action that's going on. So they leveraged those kinds of AI technologies to improve camera angles, image quality, even audio, to the point where there was speech-to-text conversion going on that's now embedded with the audio, so you have metadata you can use in post production to do a better job of getting the audio right. You know how it was being collected, you know who the participants were, all that kind of information, so you can get much better use of that material and content going forward.
[00:10:34] Speaker B: Wow, that's incredible how it's even able to track the players like that and help you just kind of predict that next graphic and stuff like that. Well, is there anything else, does anyone want to add around the Paris topic or do you guys want to move on to the next one?
[00:10:50] Speaker D: I think the one other thing I wanted to make sure we covered on this is that these converged technologies are now making their way down into the lower range. HDR is dominant now at all kinds of production levels, from the very high end for sports all the way down to, you know, what NDI is doing with NDI 6 and its HDR capabilities. That actually has more value in production, broadcast production in particular, than going full 4K. The combination of 4K capture with HDR is fantastic, but HDR itself really brings the immersiveness, if you will; that color depth really stands out significantly. So I want to make sure we mention that these technologies are quickly making their way down into the AV marketplace as well.
[00:11:45] Speaker B: Yeah, yeah. Not just the HDR, but there was also the immersive audio with Dolby Atmos, right? So anyone who went to Costco and bought a soundbar that's Dolby Atmos ready, and loaded up the right app with the Dolby playout, had a pretty cool sound experience for the Olympics.
[00:12:06] Speaker D: Exactly, very cool.
[00:12:09] Speaker B: All right, well, let's keep the pace going. Our next topic is auto tracking cameras. Auto tracking cameras with face and body detection have been a big focus this year. A standout example is the Advanced Image Robotics AIR One, which combines PTZ functionality with a digital cinema camera and a high-end lens. It allows pre-programmed movements, auto tracking, and remote control of all of that from an iPad, which seems super useful for non-technical teams like high schools and colleges. I'm going to start off with Mike Cavanaugh on this one. How do you see advancements in auto tracking PTZ cameras and auto tracking camera heads shaping future productions? I know initially we mainly used these cameras in lecture halls and conference rooms, right? Very simple, static setups. But newer models seem to be geared towards professional production, like sports. So what trends are you noticing, and how have you seen that progress?
[00:13:12] Speaker E: So absolutely. I walked into NAB in 2024 and thought AI was just going to be everywhere, and it really wasn't on the post production and broadcast side. Going back to Vegas in June and walking into InfoComm, everybody had AI, and companies like GATA Video were able to effectively switch an entire show without a single person touching or operating it, based on creating AI constraints. What I really think is exciting with the automation on PTZs is what it's coming down to. As a lot of you who follow me on social media know, I TD a church service twice a month where I have five PTZ cameras and one main Sony camera, and at a volunteer-organization level I'm sometimes stuck both working the Blackmagic ATEM as well as the PTZOptics controls. It's very confusing, especially for volunteers who come in at the last minute and find out five minutes before you're going live that, oh, you're now doing a secondary job. Where AI becomes more interesting is the merging with NDI. The TriCaster Vision has 44 inputs of NDI into a system, and from a cost level it doesn't make sense to have 44 different camera operators across the production. But with AI, you literally can have signals everywhere and anywhere, and a lot of that work can be done at an automated level, so you don't even have to manage camera call-outs while you're tracking and switching a show with 44 inputs. The Ross Carbonite Code, which is also a very exciting product, has 30 inputs. And the real big difference I'm seeing is in defining the NDI signals, where both Ross and Panasonic Kairos are full NDI, which means uncompressed. That amount of data going through the process is substantially greater, but the latency is less compared to the HX versions of NDI, which are compressed and take a little longer to decode, though they put something like one-fifth the pressure on the network. And we're seeing a lot more of the HX cameras out there in the marketplace. So we're seeing much greater levels of automation with AI, and digital video signals over IP as an input is really changing how much coverage you can have on an event.
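Since the full-NDI-versus-HX tradeoff keeps coming up, here is a rough back-of-the-envelope sketch of the aggregate network load for a fully loaded 44-input switcher. The per-stream rates are ballpark assumptions, not vendor specifications; real rates vary with resolution, frame rate and content, but the roughly five-to-one ratio is the "one-fifth the pressure" Mike describes, traded against HX's extra decode latency.

```python
# Rough aggregate bandwidth: full NDI vs. NDI HX on a 44-input switcher.
FULL_NDI_MBPS = 125  # assumed ~125 Mb/s per 1080p60 full-NDI stream
NDI_HX_MBPS = 20     # assumed ~8-20 Mb/s per NDI HX stream (upper end)

INPUTS = 44          # e.g., a TriCaster Vision-class system fully loaded

full_gbps = INPUTS * FULL_NDI_MBPS / 1000
hx_gbps = INPUTS * NDI_HX_MBPS / 1000

print(f"Full NDI: ~{full_gbps:.1f} Gb/s aggregate")  # ~5.5 Gb/s -> needs 10 GbE
print(f"NDI HX:   ~{hx_gbps:.1f} Gb/s aggregate")    # ~0.9 Gb/s -> fits on 1 GbE
```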
[00:15:48] Speaker B: And Mike, I think we should show everybody you're on an auto tracking camera.
[00:15:51] Speaker F: Correct.
[00:15:52] Speaker B: If you want to move around... I think you've got to get up, actually. Now Mike's going to be our model, so everyone give Mike a round of applause while he walks around. Yeah, nice. So that's tracking, and you're able to walk throughout the space. We put him in Jeff's office today, so sorry, Jeff, since you're away.
[00:16:13] Speaker E: I've already tried; I can't get the blinds down. Sorry about that.
[00:16:20] Speaker B: Yeah, and actually, I wouldn't mind asking the control room guys about this. Chase, J.J., you guys have been setting up all these auto tracking cameras out in the field.
What are your thoughts on these auto tracking cameras? What do customers need to think about when they're setting it up or how you guys are setting it up for customers?
[00:16:40] Speaker G: What we've been seeing, to be honest, what we're using on Mike is kind of an earlier model of the auto tracking. These are great for full-body shots, standing up, giving lectures in classrooms, that type of deal; they've been fabulous for that. It's just with the very latest cutting-edge generation, we've been getting to the point where we can actually use these cameras in more cinematic and more interesting ways, be it for sports or that kind of thing. Whereas previously we were stuck with, you know, just full-body shots of someone standing in front of a class.
[00:17:19] Speaker A: Especially with what Sony's doing with their AI auto tracking for facial recognition and everything. It's really developing into something where you're not just following an object; it's actually becoming smooth, and it could be a usable shot that, as a TD, you'd look at and say, I could cut to this live. That auto tracking is kind of a replacement for a manned operator.
[00:17:39] Speaker G: Yeah. And one of the big things is it can actually recognize individual faces. So even when I go behind Chase or whatever, it knows that I'm the guy, and it'll just keep following me. They call it facial recognition or something like that.
[00:17:55] Speaker A: Based on their AI training model.
[00:17:57] Speaker G: Yeah, based on their AI models, it knows you're there even when it can't see you.
[00:18:01] Speaker G: So if your face goes away, it'll follow your legs, and it knows where your face should be while you walk behind something.
[00:18:08] Speaker B: Yeah. Well, maybe we can ask Steve. I know there's the Vision AI...
[00:18:14] Speaker C: The Vision[Ai]ry, from Ross Video.
[00:18:18] Speaker B: They demonstrated that at NAB this year on the higher end. So a lot of what we've been talking about is these pretty low cost PTZs that we can use for churches and schools. But now we get up to the big broadcasters and they're going to need certain kind of lenses and stuff like that.
What have you seen from MRMC and from Ross and these kinds of higher-end innovations with tracking?
[00:18:45] Speaker D: So what we're seeing is similar type of technology but applied at a more macro scale so that you can track individual players on a field. Once they enter the field, we know where they're at and we can track them the entire time. We can keep the camera count down rather than having to have individual cameras for each player, if you will. By tracking where they are in the field across six or eight cameras, we can create shots particularly using 4K resolutions. We can create shots that are usable regardless of where they are in the field. Even if they leave the field, we know where they went and when they come back.
As a result, we're able to create replays much faster. We're able to give you stats and information. I'm sure you've seen it watching NFL games, where we've got little circles sitting underneath the players and we can pull up all their stats and that kind of stuff. We're at the point now where we can give that same kind of information about an at-bat in baseball. You can not only see the speed of the ball, but how he's moving his hands, how he's done that in previous at-bats, the trajectory of the ball. All that kind of stuff is now possible because of this AI capability within the hardware and software, tracking the motion, tracking the individual objects and tracking faces. It's pretty amazing.
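As a toy illustration of the kind of derived stat this tracking enables, here is a minimal sketch computing ball speed from two tracked positions. Real systems fuse detections from six or eight cameras into 3D tracks; the two samples below are invented numbers purely for illustration.

```python
import math

# Two hypothetical tracked samples: (x, y) in metres, t in seconds.
x1, y1, t1 = 0.0, 0.0, 0.000
x2, y2, t2 = 1.9, 0.4, 0.042

speed = math.dist((x1, y1), (x2, y2)) / (t2 - t1)  # metres per second
print(f"{speed:.1f} m/s  (~{speed * 3.6:.0f} km/h, ~{speed * 2.237:.0f} mph)")
# -> ~46.2 m/s, roughly a 103 mph pitch
```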
The other thing I'll mention on that, and you'll hear me talk about immersiveness quite a bit, is that it creates a better, more immersive experience for the consumer of that video. But it also makes it so that the TD can spend more time focusing on the creative look and feel of the shot, rather than focusing on, hey, is the guy ready with it? He knows the shot is going to have that information in it, so he knows he can switch and take it when he needs to. As a result, we're getting a much higher production value from having the right shots at the right time. And as that technology trickles down to what we're seeing even with the tracking on Mike, it makes that production value possible, and quite successful, even at the lower end.
[00:20:57] Speaker B: Yeah, yeah, it's amazing. I just remember, this was what, maybe two years ago, we were setting up a conference room at Key Code where we were trying to get a camera to track who was speaking in the room. You put up one of those Sennheiser microphones in the conference room and said, okay, this sees zone A, this sees zone B, and it just kind of followed them across the room. It's almost getting to the point where you can just pre-program the faces and you don't even need that sound recognition by zone. You just go, okay, go to Bill, go to Bob, and it follows the faces in the room. And I know on the AV side we saw, just recently, some pretty interesting technology where you can even pull up four or five people in the conference room, and it split-screens so you see everyone.
[00:21:46] Speaker G: Yeah, they have a really cool integration where they have one camera at the front of the room, keeps track of who's talking, and then you have five cameras all over the room. And it will just entirely automatically put multiple faces in the call, all split screen in a really cool way.
[00:22:00] Speaker D: Yeah, we're actually doing...
[00:22:02] Speaker B: Yeah, go ahead.
[00:22:02] Speaker D: ...that type of thing, Matt, now for colleges and universities, for classroom environments where we can auto-track individual students. You put several cameras up, and if a student begins speaking or answering a question in lecture, he'll get zoomed in on, and if it's part of a remote call, he'll be emphasized. The professor can see every student on every camera, and the system brings the speaker to the floor automatically, so he doesn't have to focus on highlighting anybody; it all just happens. All that AI now makes it possible for him to see who's talking and what kind of response they have, so he can see whether that student really had the answer correct, or give that student more help if they need it. A lot of universities are very interested in that type of approach.
[00:22:51] Speaker A: Yeah. And they're also doing that digitally, so you still have the wide shot, and then it will digitally zoom into the person who's speaking from just one camera angle. The actual zoom of the lens doesn't change; it crops in digitally to the person.
[00:23:05] Speaker D: That's one of the advantages of 4K cameras. You may not be producing in 4K, but capturing the image in 4K allows you to move anywhere within that frame to capture what you need for an individual event.
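Here is a small sketch of the arithmetic behind that digital crop: a UHD capture has exactly twice the linear resolution of an HD output, so you can punch in up to 2x on any subject before you start upscaling. The function and coordinates are hypothetical.

```python
SRC_W, SRC_H = 3840, 2160  # UHD capture
OUT_W, OUT_H = 1920, 1080  # HD program output

def crop_for_subject(cx, cy, zoom=2.0):
    """Crop rectangle centered on (cx, cy); zoom=2.0 is the most you can
    punch in without upscaling, since 3840 / 1920 == 2."""
    w, h = int(SRC_W / zoom), int(SRC_H / zoom)
    x = min(max(cx - w // 2, 0), SRC_W - w)  # clamp to the source frame
    y = min(max(cy - h // 2, 0), SRC_H - h)
    return x, y, w, h

# Follow a tracked face on the right side of the room.
print(crop_for_subject(2600, 1400))  # -> (1640, 860, 1920, 1080)
```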
[00:23:18] Speaker A: Very cool.
[00:23:19] Speaker B: Thing is, I'm going to move on, if that's okay. We're going to press along to IP-based workflows. Jeffrey, you're actually up on this one. So IP-based workflows, another huge topic this year, are becoming the norm, with a range of protocols to choose from. For budget-friendly government and education projects, NDI often leads the way. For example, we had a city that saved $600K using NDI instead of a baseband run for their facility refresh this past year; it's on our website if you want to look for it, the City of Aurora. On the other hand, SMPTE 2110 dominates high-end uncompressed workflows for broadcast newsrooms and sports, and we're going to get to that with Steve in a little bit. But let's start with NDI. Jeff and the control room do most of these installs. Jeff, what trends are you seeing in NDI installations at Key Code Media? Are there noteworthy new products getting installed, or updates? I know NDI 6 came out. What stands out to you as far as NDI in 2024?
[00:24:27] Speaker A: Sure, there are a couple things there, and we're actually up to 6.1; it just came out a little bit ago. One of the things that's interesting is, you know, we were talking about HDR and how it's trickling down. A lot of people don't realize just how important that is. If you look at Mike Cavanaugh and you look at me, we are victims of a lack of HDR. I can make a building appear.
Here it is.
So the thing is, if I was on an HDR camera, you would see my face. You would have no blur. You'd see the building, you'd see everything.
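For a rough sense of why HDR holds both the face and the bright window, compare the code values and brightness ranges involved. The numbers below are standard reference points (8 versus 10 bits; roughly 100-nit SDR reference white versus the 10,000-nit ceiling of the ST 2084 PQ curve), not measurements from this camera.

```python
# Rough contrast budget: SDR vs. 10-bit HDR delivery.
SDR_CODES, HDR_CODES = 2**8, 2**10  # 256 vs. 1024 code values per channel
SDR_REF_NITS = 100                  # traditional SDR reference white
PQ_MAX_NITS = 10_000                # ST 2084 (PQ) encodable ceiling

print(f"SDR: {SDR_CODES} codes, ~{SDR_REF_NITS} nits reference white")
print(f"HDR: {HDR_CODES} codes, up to {PQ_MAX_NITS:,} nits encodable")
# 4x the code values spread over a far larger brightness range is what
# lets a bright window and a shadowed face coexist in one shot.
```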
So this is getting down deeper into, you know, any place that's got windows as a backdrop. You've got people using this technology in new and better ways, to see more and to see more clearly, because, as you can tell, my focus is soft because of the light behind me. The other thing we're also seeing is...
[00:25:30] Speaker A: ...the NDI split away from NewTek when Vizrt purchased them in 2019. And what happened at that point is the ecosystem became open. It went from, I don't know if anyone ever had an iPhone 1, but there were only Apple apps at that point, and finally they opened it up. It just allowed all sorts of really cool stuff to happen.
So the fact that it's opened up has now finally found its legs, because for a while it was still a little tied together with Vizrt; they had some people working across the two organizations. Now that it's open, you're getting things like the Carbonite Code, you're getting the QuickLink stuff, and it's opened up a lot more choices for people to pick different ways of working. You're not stuck living in a NewTek TriCaster world just because it's NDI; there are other tools out there doing that, and they're adding all this stuff on. The other really cool thing that's come up is Bridge. Bridge came out in NDI 5, I believe, and what it was then was: you had a computer, and you had a computer on the network sitting behind it with all the NDI stuff, and those two computers talked to each other. That was the connectivity. What they've done now is put Bridge into devices.
So instead of having it connect to a computer and that entire chain of things, you can have a number of different sources that bridge themselves back to the core network across the WAN. It could be across the city, across the state, across the country. And those sources can now join in, because the Bridge technology is built into the device; it's not a separate part of the network. So in the model you're showing there, each device in Network B would individually connect. There's not a central choke point, because as we discovered back in the day when we used to do events, that choke point is a little difficult.
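Conceptually, each bridged device dials out to a well-known core endpoint rather than funneling through one on-prem gateway PC. The sketch below is purely illustrative of that client-initiated pattern; it is not NDI's actual protocol, and the host, port and handshake are invented.

```python
# Conceptual sketch only -- NOT NDI's actual wire protocol.
# Each remote device dials OUT to the core, so no per-site inbound firewall
# holes are needed, and there is no single gateway PC acting as a choke point.
import socket

BRIDGE_HOST = "bridge.example-station.com"  # hypothetical core endpoint
BRIDGE_PORT = 5990                          # hypothetical port

def register_source(name: str) -> socket.socket:
    """Open an outbound connection and announce this device to the core."""
    conn = socket.create_connection((BRIDGE_HOST, BRIDGE_PORT), timeout=10)
    conn.sendall(f"HELLO {name}\n".encode())
    return conn

# A PTZ that travels with the host just announces itself from wherever it is.
cam = register_source("palm-springs-ptz-01")
```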
[00:27:46] Speaker G: Yeah, I was going to say, I just did this for a really small radio station. It was a radio show with a couple of hosts, and one of the hosts moves back and forth between, I think, Oregon and Palm Springs all the time. So she just drags her PTZ back and forth with her. Using Bridge makes it a cinch to bring her video and audio straight into the studio. Once we programmed into the camera where the bridge was, we don't have to do any level of configuration to a computer or anything like that. It makes it a lot cleaner to bring that video back and forth the way you want it.
[00:28:31] Speaker A: Yeah, simple enough for talent to do it. That's the whole point.
[00:28:35] Speaker G: And if talent can do it...
[00:28:37] Speaker B: Mike Cavanaugh. Mike, I was going to say, you've been deep in these video switchers that Jeff's talking about. It seems like TriCaster was really the only one for a while. I mean, there were other options as well, but recently you had the QuickLink Studio Pro, which kind of filled in this TC2 gap while NewTek was waiting to release the TriCaster Vision. But it has a ton of other features that have a newer feel, kind of like the vMix kind of features, that might attract a younger user. And then you have, obviously, on the Ross side, the Carbonite Code. So what are you thinking, Mike, about these video switchers that are coming out all NDI-ready, and where do they fit?
[00:29:25] Speaker E: Well, I think everybody's really struggling with that, and I think the real key thing is that vMix, QuickLink and Vizrt are all able to handle the HX signal. In going out there and talking with Ross and Panasonic specifically, they are still trying to go from full NDI to HX, and I think that's a big issue, because the full NDI cameras are substantially more expensive. Most clients have already purchased a lot of HX cameras that they just want to integrate into their network, into their switching system, and have low-cost systems. The conversion is a little bit wonky. We're able to do it, but we are talking extensively with the NDI team, with Ross Video, with Panasonic, to say: really lean into HX and take advantage of it. Because at the end of the day, even though it's live broadcast, and everyone says that's what full NDI is for, we know that at the end point, whether you're streaming on YouTube or watching at home off a cable system, you can be up to 20 seconds delayed from when the picture actually happens. So we're really working through that concept of ubiquity between camera sources, where it doesn't matter whether it's HX or full NDI, or integrating SMPTE 2110. And I'm very interested in the panel's take on what Dante AV is doing and how that may impact IP networks for cameras in the future as well.
[00:31:00] Speaker B: Yeah, you mentioned Dante. That's another one we were going to talk about. Maybe if we get to the questions later from the audience, I think we'll get into that.
Really cool. So: NDI 6 HDR, more NDI video switchers available, and Bridge connecting different cameras and control rooms together. Really nice. And as we were talking, sorry, I...
[00:31:24] Speaker G: Was, we were talking about HX Bridge will also just convert HX streams to full NDI as a stopgap. So that's one of the, that's one of the really awesome things is you NDI has all sorts of quick fix functionality that a lot of, you know, you would take a whole rack of format converters. Now it's just a PC in your rack.
[00:31:45] Speaker B: Yeah.
All right, well, I'm going to move it over to Steve. Steve, let's move to SMPTE 2110, the other option that has really been blowing up in the last few years. I know you've been involved in transitioning sports stadiums, newsrooms and PBS stations to 2110, and I know it's been getting a little easier over the years. We mentioned earlier that Blackmagic introduced 15 new 2110 products at NAB 2024, which is kind of a canary-in-the-coal-mine type situation. Steve, are we finally starting to see mass adoption of 2110 for facility upgrades that would traditionally have gone 12G baseband?
[00:32:32] Speaker D: Yeah, we are. And it's because the tools are coming down in price.
And the production value is really starting to come to the fore. Some of the earliest adopters were in live production around trucks; simply reducing the amount of physical cabling that had to be on a truck was significant.
The cabling inside used to account for about 30% of a truck's weight, and that's been reduced by 60 to 70% in some cases.
The ability to get multiple signals on a single piece of fiber, you know, 100-gig connections to the spine of the network, those types of things are now affordable, and they reduce the total number of cables needed in a plant. As an example, take a facility we're just completing now for KMGH in Denver, Colorado.
Their total wire count is down significantly, and the amount of space needed in the wire trays is quite low compared to what it would have been with a 12G SDI type of approach. There are all kinds of other factors that tie into that. Another one is power reduction. You've got lots of subscribers to a given IP signal, and because you're using multicast, you're able to reduce the number of individual feeds going to devices; everybody's just picking up that information off the network and getting what they need to process or display, whatever else might need to be done. That's finding its way down into smaller and smaller productions. I've got a client that does corporate AV work, and reducing the number of cables they have to carry in their equipment rack, whether they're running into a hotel ballroom or a large NFL stadium to do IMAG and video work for corporate events, has made a huge impact on what they're able to do. The other part is that once you're in that IP domain, your ability to access services running on standard servers and computers radically increases. Chase just mentioned that it used to be you had to have a bunch of individual pieces to do some of the production stuff; now it's just a PC that you're using. That's really come to the fore and is only going to get better. It's lowering the cost of adoption as well as enabling you to leverage new technologies as they come out.
The biggest problem we've got right now is the adoption of NMOS. We're seeing that grow, but nowhere near as fast as the adoption of SMPTE 2110 by itself. The NMOS part is critical as well, because that's what makes every vendor's products talk to each other more easily; it's a unified control platform. And that's actually the window of opportunity that Dante has: the control portion was the limitation in the audio world, they've satisfied that, and now they're seeing that, hey, it's not that tough to do video essence in an IP packet. So they're developing their own standard there, but it's really based around how they can control it. The control and orchestration part is just as important as the transport of the packets themselves. The good news on that front is that we've got people all the way down to Elgato, with their Stream Deck products like the Stream Deck Studio, incorporating NMOS, so you can now have direct control, a low-cost orchestration layer, for a 2110 installation. So the AV guys can now have easy, affordable access to it, and it'll also satisfy the needs of a high-end sports production facility or television station.
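For a sense of what that unified control layer looks like in practice, here is a minimal sketch of an AMWA NMOS IS-05 connection request: you PATCH a receiver's staged endpoint to point it at a sender, then activate. The node address and UUIDs are hypothetical, and a real routing change would usually also stage the sender's SDP transport file.

```python
import requests

NODE = "http://10.0.1.50"  # hypothetical IS-05 control endpoint on a receiver node
RECEIVER_ID = "a1b2c3d4-1111-4abc-9def-222233334444"  # hypothetical receiver UUID
SENDER_ID = "b2c3d4e5-5555-4abc-9def-666677778888"    # hypothetical sender UUID

staged_url = f"{NODE}/x-nmos/connection/v1.0/single/receivers/{RECEIVER_ID}/staged"
patch = {
    "sender_id": SENDER_ID,
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},  # take the route right now
}

resp = requests.patch(staged_url, json=patch, timeout=5)
resp.raise_for_status()
print(resp.json().get("activation"))  # confirms when the route was activated
```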
[00:36:35] Speaker B: Wow, that's really interesting.
Does anyone else have any thoughts on 2110 and how it's grown this year? My only comment, Steve: I remember we've been talking about this for years now, and even just a couple of years ago it was like, oh, it's almost all there, but the prompter or this camera needs an adapter. Now it just seems like it's all native. But does anyone have thoughts on 2110, what you've seen and how this technology has grown?
[00:37:03] Speaker E: Well, I can say on my side, Matt, with both NDI and 2110, that the traditional video engineer's skills have to really evolve: really get those IP and IT, CCNA-type updates, and recognize that it's no longer plugging a cable into a router or patch bay, but doing real-time updates, understanding what the technology platform is, understanding your staff's skills, and ensuring there's a program so they can be trained and efficient, and that you have the appropriate people. There are a lot of efficiencies in 2110. I mean, we're pricing out 12G and the limitations of how far copper cable can go, how much it weighs, and how it burdens the facility, where, as Steve can give some input on, one Cat 6A cable is now carrying multiple streams of video and audio. But conceptually, looking at the drawings from Vectorworks or AutoCAD, there's a different way now to approach how signal flow works in an IP world. So I think a big issue will really be transitioning the labor market and the staffing requirements to be trained up on that, becoming Arista-, Cisco-, Netgear-specific, understanding that signal flow is now data flow, and knowing how to troubleshoot it quickly and how to maintain it.
And I do have something I want to bring up about overall broadcast. Nielsen is saying that overall cable broadcasting is dropping at about 10% a quarter, and that's a significant issue that the traditional stations are really challenged with; they have to make a pivot. You have things like the Joe Rogan Experience, where some people are saying that the three-hour interview President-elect Trump did flipped the election. So think about how communication is happening now. And literally, for our clients out there thinking about how to create a studio: we can get you up and running for well below $10,000 so you can start producing content. Think of this as the copier machine of the '80s or the fax machine of the early '90s, in terms of how you're able to publish content and make it compelling. Kudos to Matt and the entire marketing team; we just surpassed 5,000 subscribers on our YouTube channel. If you look at it, we really pivoted during COVID and decided to start broadcasting, for lack of a better word, to quote Steve Ballmer, eating our own dog food, really leaning into communicating our messages through video. And it substantially helped us grow our revenues from a business perspective. So even if you're not in the traditional world of broadcasting, really start building your media delivery strategy. And, to do a plug for Key Code Media, we can help you figure that out and build an executive summary for your executive team on why they need to start investing in new ways to communicate to stakeholders, to end users, and internally within the company, and really up your production game. Because when you have people working at home, the fight for eyeballs is intense, and the more compelling your content, and the more quickly you can get your messaging out over video, the better. Everybody can have their own broadcasting station now, and everybody should.
[00:40:38] Speaker B: Great. I'm going to move on to remote production and contribution now, just because of the time; we've got to keep moving. So, remote contribution: operators and contributors have seen significant improvements in quality and available tools. For instance, Clear-Com's Agent-IC app allows crew members to use their iPhones as an intercom, enabling communication from virtually anywhere, so you have intercom anywhere. Similarly, we mentioned earlier that Advanced Image Robotics offers full remote camera control via an iPad app as well. Mike Cavanaugh, what trends are you seeing in the remote production and contribution workflow space? Are there any standout solutions transforming how teams operate remotely? LiveU, Kiloview and so on?
[00:41:29] Speaker E: Yeah, I mean, that's exactly it. We actually leaned into LiveU many years ago, in the early 2010s, when we had clients doing lower-budget fights, you know, Golden Boy Promotions. And they all of a sudden realized they didn't need to move their control room; they could just send a camera operator out with a couple of LiveU packs, do the show, and broadcast it remotely, which substantially lowered their cost to go to market. It's just really compelling. Once again, it's: how can you have high-quality, compelling content produced at the lowest possible cost, using remote contribution? We have a local television station here, one of the three-letters, where during COVID we set up their XPression artists on a Ross workflow to work fully from home. And people want to work from home; their quality of life is better. As for the technologies, LiveU has definitely been the leader, integrating 5G. Kiloview has some things that are very exciting. And we're working on our side with some chip providers to provide very low-cost, pay-as-you-go type models in the cellular 3G, 4G, 5G world, really trying to look at it end to end, allowing people to be anywhere and contribute from anywhere, which lowers the cost and also increases the quality of the production.
[00:43:03] Speaker A: Nice.
[00:43:04] Speaker B: And I know we did talk quite a bit about remote production earlier; we talked about the Olympics, Starlink and 5G. But just before we move on, are there any other comments, Jeff or Steve, on remote production that you found compelling, that was a little different this year from the past?
[00:43:22] Speaker A: Well, I mean, the LiveU thing, not having to have satellite trucks, is just huge. The Sports Video Group summit was at a Midtown Hilton, and something happened nearby a couple weeks ago, so people were continually doing live shots out on 54th. You'd walk out and it's just two folks: one with a LiveU and the camera, plus the reporter, and that's it. It saves you having to buy a lot of vehicles with satellite trucks and then figure out where you microwave or beam stuff off of. So it reduces your ask in terms of ENG.
[00:43:56] Speaker D: Yeah.
[00:43:57] Speaker G: It gives you opportunities you wouldn't have otherwise. We did a shoot from a boat, where, you know, mounting a satellite dish on a speedboat is not a thing you can do. Putting a couple of LiveU units on it a couple miles off the coast was a great time.
[00:44:14] Speaker D: Yeah. And then using that 5G network to backhaul it onto Starlink so that you can do remote production anywhere without a satellite truck and have it be much closer to real time. I was mentioning in some of our pre discussions that the typical latency on a Starlink connection is between 20 and 40 milliseconds, as opposed to 600 milliseconds for a geostationary link. That brings you into a single frame or two frames of delay so that you can actually cut something live using a Starlink type of backhaul link. And that's what we saw at the Paris Olympics. The cool part about that is low Earth orbit satellites are going to become ubiquitous. Starlink's not the only option that will be there in the future. So look for that to enable remote production even more.
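The frame math behind Steve's point is quick to verify. At 29.97 fps a frame lasts about 33 ms, so a 20 to 40 ms LEO round trip lands within roughly one frame, while a ~600 ms geostationary hop costs around 18 frames. The sketch below just runs those numbers.

```python
FPS = 29.97
frame_ms = 1000 / FPS  # ~33.4 ms per frame of 29.97p video

for link, latency_ms in [("Starlink (LEO)", 30), ("Geostationary sat", 600)]:
    frames = latency_ms / frame_ms
    print(f"{link:18s} ~{latency_ms:3d} ms  ~= {frames:4.1f} frames of delay")

# Starlink (LEO)     ~ 30 ms  ~=  0.9 frames -> you can cut the show live
# Geostationary sat  ~600 ms  ~= 18.0 frames -> painful for live switching
```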
[00:45:06] Speaker B: Awesome. All right, guys. Well, I'm really excited about what we're moving to next: our broadcast and live production product lightning round. Sound effect!
So each of you is going to quickly go over your favorite live production product for 2024. I'm going to put on my TD hat for this one as well.
So let's kind of go around. Steve, let's start with you. What was your favorite broadcast and live production product for the year?
[00:45:36] Speaker D: Well, this might surprise you, me being kind of a high-end technology geek, but my number one product this year was the Netgear M4350 series of network switches, which are PTP-capable and support Dante and other AV standards. The reason I chose that one is that it brings a switch fabric down into the affordability range, not just for AV, but for high schools even. So we can see all this IP-based production now able to be done simply and easily using affordable switch fabrics.
[00:46:15] Speaker B: Very cool. Jeff, how about you?
[00:46:17] Speaker A: There's so much cool stuff out in the broadcast world that came out this year, but I'm also going to go with the Steve thing.
NewBlueFX has instant replay as part of their Captivate product. It's an extremely low cost of entry to get into instant replay for high school or, you know, lower-level college sports, and it's a great little product.
[00:46:44] Speaker B: Awesome. Mike, what about you?
[00:46:46] Speaker E: I like what QuickLink came out with: a product called StudioEdge, which effectively transforms how you can bring people in remotely, a combination of remote guests having sort of that green-room communication and having real-time group conversations. I think that's really going to explode in the market in 2025, allowing people to have higher-quality, better productions, including Key Code Media.
[00:47:12] Speaker B: All right, all right, so that's going to wrap up our live production. Thank you everyone. Oh, I got to put the applause track on.
All right, so Chase, let's cue the script and let's get to post production.
All right, let's get to a few.
[00:47:32] Speaker C: Of the big trends in post production technology. The traditional TV and film market saw a steep decline; reports showed a 35% drop in projects in Q3 2024 when compared to pre-strike levels in 2022, with slow strike recovery and increased outsourcing to the UK and Australia to blame. AI for creative editing surged ahead. Adobe, Resolve and Avid expanded beyond transcription, introducing generative tools directly in their NLEs. Adobe Firefly and Enhance Speech are now native to Premiere Pro in the 25 release, while other platforms teased tools for rough cuts, titles and scene extensions. AI for media management also gained traction, making MAM systems even more useful. Companies like Perifery, Wasabi and Scale Logic improved integrations with tools like iconik, Frame.io and CatDV, enabling faster object, face and dialogue searches for creative teams. Remote editing has become a standard feature for post facilities, with tools like LucidLink enabling teams to scale during busy seasons. Shared storage providers like SNS and Facilis developed secure solutions for remote access to on-prem storage, further streamlining workflows. Let's get into it.
[00:48:49] Speaker B: All right, so we're going to start with the state of the media entertainment industry in 2024. It continues to face challenges with US production volumes slowing down 35% in Q3 2024 compared to 2022. Many post facilities are downsizing, shifting operations, going international. Jeff, what are your thoughts on the state of the M and E post production industry?
Is there a path to recovery for them?
[00:49:16] Speaker A: I mean, before this event, I was on LinkedIn. You know, every time someone pops up with "hey, I got a job," let's celebrate that. We need to.
The thing I ran into: Mike likes to say that post production is a lagging indicator. It's the last part of the process, so it's not necessarily going to pick up first. People at the SVG show were saying that stages in Los Angeles are busy, so hopefully that is the leading indicator, meaning, hey, there's work coming.
But on the greater front, I feel it's going to need to be regulatory and tax-based. One of the things that's happening is folks are running to places that have great tax incentives.
We see a lot of productions coming out of Georgia, where there's a great amount of incentive for people to go. So we need assistance from the governmental folks to give us a kick-start to get the engine going again.
[00:50:24] Speaker B: Yeah. And Michael, comments? I know you've kind of been in the background, but now we're on post production, so hopefully we're going to have more commentary. I wanted to hear what you're hearing from people on the ground level, but I'm also kind of curious: as we're in this transitionary period, is it making people start looking at breaking the rules a little bit? Maybe they were really restrictive on cloud, and now they're going to be less restrictive because it's cheaper in certain circumstances. How do you see the technology being shaped by this post production trend?
[00:50:58] Speaker F: Well, if we look at some of the higher-end players in New York and LA, there have always been rules about what you can do in the cloud and what you can't, especially when you're working with someone else's content. So we kind of had to wait until those rules were revamped, and that's what TPN and TPN+ did. But during the pandemic, so much content was created with people sitting at home or elsewhere that it kind of proved it could be done, and that's only continued to be enabled this year. In fact, this year I don't know one production I worked with that didn't have people working remote. And that meant: can we take the tools we've traditionally used and either forklift them to the cloud or augment them with cloud components to cobble something together? What we really found is that it almost has to be: let's go back to the drawing board and start from scratch, because trying to bolt on different things often made more trouble than it was worth. And that's when the cloud solutions ended up popping up even more. And that dovetails directly into AI, because the majority of the AI stuff that's done is done in the cloud, right? That's where you get compute and storage.
[00:52:12] Speaker B: Yeah, and let's get into that. AI for creative editing: generative AI tools like Firefly for video in Premiere Pro are catching attention.
Even that clip we were playing out, I used Enhance Speech, right, when exporting it, and it turned my voice into this beautiful podcast voice.
Anyway, color correction and transcription are arguably having a bigger impact on an editor's productivity today. Michael, as someone who works closely with editors, what are you hearing about which AI tools are genuinely useful versus those that might be a little overhyped right now?
[00:52:48] Speaker F: There's a ton of AI grifters out there. There just is.
And one thing that a lot of folks have to remember: remember when Final Cut was killed and Final Cut Pro X came out? People conflated the launch of the product with the actual product. The product was great for what it did, but the launch is what soured people. And when we look at AI, the marketing hype has been around generative AI, the Soras, the image and video and audio generators that are creating something out of nothing, or utilizing, let's say, influence from things they were trained on, either legally or illegally. But the thing that really is going to change our industry, and already is, and more people need to jump on it, is the concept of analytical AI. Analytical AI is taking content that's already been created and generating meaningful metadata from it. That would be reading a script and summarizing it. It would be looking at an hour-long episode of a TV show and knowing who said what, and whether they were happy, whether they were sad, whether they were speaking a different language, scene detection, those types of things. And as any editor can attest, you spend a lot of your time looking for appropriate B-roll, or going to stock houses and finding content, or finding that perfect music track, which takes forever. You're never done, you just hit a deadline. And so analytical AI is what allows us to search and find and sift content, or discover content, as it's more commonly called, to find that content for editing. And that's where we're seeing a lot of play right now. In fact, all of the major NLEs have some form of analytical AI built in, whether it's editing via text or transcription, or Adobe, who's been leading the way with integrating the gen AI of Firefly, but also putting speech-to-text and text-based editing tools directly within the editor.
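As a minimal example of the analytical-AI pattern Michael describes, here is a sketch that turns a finished episode into searchable, time-coded text using the open-source openai-whisper package. The filename is hypothetical; a production pipeline would feed the result into an index or MAM rather than printing it, with summarization, face and scene detection as separate passes.

```python
# pip install openai-whisper  (also requires ffmpeg on the system)
import whisper

model = whisper.load_model("base")            # small, CPU-friendly model
result = model.transcribe("episode_042.mp4")  # hypothetical finished episode

# Time-coded segments are what make dialogue searchable for editors.
for seg in result["segments"]:
    print(f"[{seg['start']:7.1f}s] {seg['text'].strip()}")

transcript = result["text"]  # hand this to summarization / search indexing
```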
[00:54:54] Speaker B: Yeah. Mike Cavanaugh, what are your thoughts? What do you think is the biggest roadblock to adoption? Is it just that everyone's still stuck on Media Composer version 8? Is it that people have ethical concerns, or that it just doesn't look that good? And we're specifically talking about the NLE side of things; we'll get into media asset management later. What's keeping people from running to AI in editing the same way we run to ChatGPT to ask a quick question?
[00:55:25] Speaker E: I think right now it's really a matter of adoption and figuring out where things are. I mean, there's a lot going on behind the scenes of AI integration, where the studios won't put it on paper, but they're looking and saying, how can we do localization faster? NVIDIA's Maxine technology allows my voice to go out in multiple different languages sounding like me, which is insanely crazy. But that's not going to hit Hollywood immediately; there are a lot of contracts and everything else. And we actually see the worship side probably being the first market for adoption. It used to be other markets that adopted first; now it's worship. That's just one of those things, I guess, and it's exciting, because people are looking at multiple languages and getting that out. But I think it's really a matter of adoption. And one of the things we're really looking at, where we're going to have Michael Kammes, and this is news to him, start teaching some classes at our Key Code education school, is AI and the integration of those tools and how people can use them to be more productive. I think people right now are scared about the impact on their jobs. As we just discussed, the number of editors working right now is substantially lower than it's been in years. A lot of people want to be in the industry, but they're also very nervous about the impact of AI on an entry-level job, like, say, becoming an assistant editor, when AI can now do 80% of that job.
[00:57:00] Speaker F: Mike, I'm going to disagree slightly. And what I mean by that is, for the past 40, 50, 100 years, the only way to be involved with mass media was film or television, right? So you had to adhere to whatever the standards were, whatever the personnel, whatever the infrastructure was to do that. And we've seen over the past several years that the number one job that young Americans want is to be an influencer. And that is completely outside the traditional Hollywood, film, New York film and television world. So they no longer have to be worried about.
I've got to learn how to use Avid, I've got to learn Avid NEXIS and bin sharing, and I have to understand VFX handoffs. Should you, as you get bigger, whether it's in M&E or as an influencer, have to know that to work with larger groups? Yes. But you don't have to follow the neon arrow that we have for film and television. I think that changes a lot of workflows. And as you can attest, we often consult with clients who are startups with small podcast studios, but we also work with ones that have among the most followers on YouTube. We find that as they add more storage and more stories and more viewers and more subs, their needs rise. And what we find is that as much as the up-and-comers want to change the world in the way media making is done, when we get back to working in groups collaboratively, you still have to kind of follow how things are done, because we've all had the slings and arrows before. The procedure may seem archaic, but it's what's needed when you're working with a lot of different people.
[00:58:55] Speaker E: Totally agree. I mean, some of our top clients this year are influencers, as compared to the traditional post production studio facility or broadcaster. And things are changing, where anyone can communicate and get things out there. I think AI is going to be a huge enabler, giving people the basic stuff: image tracking, stabilization, image capture, data analysis, localization. I think we are in a very exciting time for how people can get their content out to market, and the next two years are going to be super exciting for the evolution around it, specifically in post production.
[00:59:35] Speaker B: Awesome. Well, that was a great conversation. I like how it also evolved into how post production in Hollywood might be slowing down, but boy, is it booming for corporations and YouTubers. It's really exploding in those areas, and those people are a little less restricted in the requirements they have to follow, so they can use these AI tools and these newer solutions. I'm going to transition to AI for media management. We separated this one out because I know we've done a lot of work on media management this year. AI has proven to be a game changer for media management, collecting that metadata and making assets searchable, particularly for mid-to-large teams. On-prem solutions from companies like Perifery and Scale Logic are gaining traction, while cloud-based options from AWS, Wasabi and others continue to evolve. Jeff Sangfill, having implemented an on-prem AI media management system this summer, can you share your insight on where these tools are today? Are they ready for prime time? What should teams expect when integrating AI for media management? Is it as simple as turning the AI box on, or is there a lot more involved, like traditionally setting up a media asset management system?
[01:00:53] Speaker A: There's a lot more involved. You can't just turn it on and all of a sudden get all this wonderful goodness.
The thing we've discovered is there are some boxes out there that have that in mind, and you turn them on and you can get the wonderful goodness; it's just going to be next week, or maybe the week after. It just takes a while to process the content. And the other thing we discovered is that with some of the boxes that do do this very quickly, you've got to have someone with deep programming knowledge to make them behave.
And the other thing we found out, and we kind of did this on purpose: we rolled out an AI system on an older computer we had around, with a fairly decent GPU we also still had around, to see what happens when someone comes along and says, give me the software, I'll just run it on what I have.
No, for the love of God, no.
It hurts.
It's extremely painful.
And a lot of the AI implementations are geared toward the most modern hardware. So even something that was a $20,000 GPU in 2019, the new stuff is leaving it in the dust. So, are we ready for prime time? Yes, probably in the cloud, and Mike may have other thoughts on this as well, but not quite on prem, I think, in any way that's quick to get you actionable results.
[01:02:31] Speaker E: Well, to dive in a little further, we went from there and I basically told the tech team at Key Code Media: I want the last two years of every Broadcast to Post episode ingested. I want to see summarization, I want to see transcription, I want to see translation, I want us to play with image identification, I want to see facial recognition. And they were like, oh God, Mike's making us do this again. But you know, I'm really leaning into AI. I mean, it's going to be utterly transformational. And because we have all this content we've been producing over the last ten years, we have a lot of information, a lot of data, that we're able to test with and use to give our clients guidance on how this can fit, where it is, and what the gaps are. I mean, one of the big gaps we're seeing is that all these different AI providers are grabbing a lot of freemium code and trying to drop that into their model, and a lot of it is still, literally, Linux-based command-line coding to make changes. I think that level is going to change very rapidly. And we are aggressively driving everything we're doing and getting our hands dirty, figuring out how well this is working and testing it on our content, so we can have conversations with our clients about the meaningful level of what AI can do, how it's working and how we make it work, show real-time examples, and then work with them on their footage to better understand their library and remonetize it.
[01:03:59] Speaker A: And the thing there is having that directive: here is the business case we would like to pursue. That's the perfect thing. You really need to start with that. Because when you get into AI at the beginning, it looks like this panacea: oh, it does all those things. And no, you don't necessarily need to do all those things. There are certain things you need. So like Mike said, transcription, translation, summarization, face and object identification, those were critical to what we wanted to do, and that was set by a business process. The thing is, AI does a heck of a lot, and the results we were getting from our systems, we've been plugging those into asset management with iconik, which is cloud based, because it's a nice easy fit and we can scale that up as we need to.
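To make that "start with the business case" point concrete, here's a minimal Python sketch of an enrichment pass that only runs the tasks the business case names. The task functions are hypothetical stand-ins, not any vendor's actual API.

```python
# Minimal sketch of a business-case-driven enrichment pass.
# The task functions are hypothetical stand-ins for whatever
# transcription/summarization/face-detection services you deploy.

from typing import Callable, Dict

def transcribe(path: str) -> dict:      # stand-in: e.g. an on-prem speech-to-text service
    return {"transcript": f"(transcript of {path})"}

def summarize(path: str) -> dict:       # stand-in: an LLM summarization call
    return {"summary": f"(summary of {path})"}

def detect_faces(path: str) -> dict:    # stand-in: a face-recognition pass
    return {"faces": []}

# Only run the tasks the business case actually calls for.
TASKS: Dict[str, Callable[[str], dict]] = {
    "transcription": transcribe,
    "summarization": summarize,
    "face_id": detect_faces,
}

def enrich(asset_path: str, business_case: list) -> dict:
    metadata = {"asset": asset_path}
    for task in business_case:
        metadata.update(TASKS[task](asset_path))
    return metadata

print(enrich("episode_042.mp4", ["transcription", "summarization"]))
```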
The thing there is you're able to generate a lot more metadata than you necessarily need at that point in your process: business metadata, production metadata or archive metadata. But if you throw away metadata, you will regret it later.
Keep it. One of the things we're trying to work on is metadata portability, because the asset management system you bought five years ago may not understand all the cool stuff your AI tools can deliver to it, but something else is going to come along a little later and be able to do that. Or your asset management system may get a software update and suddenly, oh look, all the new hotness just came online.
So those are the other takeaways we've had: one, don't build with all the old hardware, and two, don't dispose of metadata. And there are ways to turn that metadata into JSON files, and JSON files are just sidecar files you can keep alongside the assets on your storage.
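As a rough illustration of that sidecar idea, here's a minimal sketch that writes AI-generated metadata, including time-based events, to a JSON file next to the asset. The field names are illustrative assumptions, not a standard schema.

```python
# Minimal sketch: persist every piece of AI-generated metadata as a JSON
# sidecar next to the asset, so nothing is thrown away even if today's
# MAM can't ingest it yet. Field names here are illustrative, not a standard.

import json
from pathlib import Path

def write_sidecar(asset_path: str, metadata: dict) -> Path:
    sidecar = Path(asset_path).with_suffix(".json")
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar

metadata = {
    "asset": "episode_042.mp4",
    "summary": "Year-end recap of live and post trends.",
    # Time-based metadata: per-moment events a legacy MAM may not support yet.
    "events": [
        {"start": "00:12:03", "end": "00:12:41", "label": "face:Speaker_B"},
        {"start": "00:57:00", "end": "00:58:55", "label": "topic:influencer workflows"},
    ],
}
print(write_sidecar("episode_042.mp4", metadata))
```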
[01:05:45] Speaker E: Our good friend Bryson Jones of North Shore Automation said it in one of our panels: when you start asset management, you start with great hopes of maybe creating 20 fields to tag. Then after year one you're down to seven; year two, you're down to three. And really, how we're viewing it at Key Code Media is as the next generation of asset management: you don't know what you're going to need to tag or view in the future, but AI can now solve that problem for you instantly, harvesting that specific need, that specific search, and pulling and culling that information incredibly quickly.
[01:06:20] Speaker F: But I'd like to add briefly to that, Matt, I know we have to move on. A couple of things we've learned from the last year of working with clients that I'd like to share. Mike kind of alluded to it: the technology of asset management that no one really wanted to use, because you had to do labor before you could be a creative, we have to go back to that. We have to go back to asset management, but now the fields are being filled in by AI. That also means we need an asset management system that can handle it, and a lot of legacy asset management systems don't support clip- or time-based metadata. They don't understand what happens from moment to moment, just that something we were looking for happened somewhere in that clip. So as a client you'll need an asset management system that can do that. Also keep in mind what model you're using: is your model censored? Is it filtered through a larger corporate entity that may be getting rid of things you want to search for? Also, if you're using third-party APIs, Twelve Labs is a very big one, or you're using OpenAI or Anthropic: how are their models built? What are they trained on? And lastly, keep in mind that all models are pretty much a black box. If you buy an AJA or Blackmagic card, you know what the ins and outs are, you know what the specs are. When you get a model, you don't know all the data it was trained on, you don't know what biases it has, you don't know what censorship has been done. So you really need to do due diligence to make sure the model you're using is actually appropriate for the task you're trying to accomplish.
[01:07:55] Speaker A: Yeah. And there's a lot of pointers there, Michael.
[01:07:57] Speaker B: Just a follow-up on that one, Michael. I think we thoroughly covered roadblocks, and I think it's really important to talk about the challenges with on prem versus, if you can access the cloud, how much easier that makes things.
We talked a lot about how the setup is going to take time. It's not just a switch you turn on; it is kind of like asset management, where you're building out something over time, just like building out a database. Michael, maybe end it on some of the coolest things you've seen with AI and asset management. We talked about face detection and transcripts. What are the coolest implementations you've seen putting AI into asset management?
[01:08:38] Speaker F: Putting AI into asset management, first off, just using a third-party AI provider is kind of the lowest bar to entry, right? I like what iconik is doing; they'll have more announcements next year about analytical AI. As was alluded to earlier, a product like Carol 1 from Scale Logic is really in its infancy, but it has a lot of promise.
We also have things like AI+ from Periphery, which is one of the first solutions in M&E doing chain of thought, right? Why use a massive multimodal model to look at the text you put in, when you don't need the visual side of that multimodal model for that particular piece of data? You just need something that understands text. So using chain of thought to pick the most appropriate, most cost-effective model for each task is what we'll be seeing going forward. There's also a kind of grassroots application that I fell in love with over the past year. It's called Jumper. Go get Jumper, take a look at it, it's great. It works inside Premiere and will do a lot of your analytical AI before Adobe fully rolls theirs out in the upcoming months.
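To illustrate the routing idea Michael describes (not Periphery's actual implementation), here's a minimal sketch where each task is sent to the smallest capable model. The model names and per-call costs are made-up assumptions.

```python
# Minimal sketch of routing each task to the smallest capable model,
# rather than sending everything through one large multimodal model.
# Model names and costs are illustrative assumptions only.

ROUTES = {
    "transcript_summary": {"model": "small-text-model",  "cost_per_call": 0.002},
    "image_tagging":      {"model": "vision-model",      "cost_per_call": 0.02},
    "cross_modal_search": {"model": "multimodal-model",  "cost_per_call": 0.10},
}

def route(task: str) -> str:
    # Unknown tasks fall back to the big multimodal model.
    choice = ROUTES.get(task, ROUTES["cross_modal_search"])
    print(f"{task} -> {choice['model']} (${choice['cost_per_call']}/call)")
    return choice["model"]

route("transcript_summary")   # text-only task: no need for the multimodal model
route("image_tagging")        # visual task: gets the vision model
```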
[01:09:55] Speaker B: Very cool. All right, well, moving on to our final topic before we get to the lightning round: remote editing and cloud. Remote and cloud-based editing tools continued to mature in 2024. Solutions like LucidLink enable teams to collaborate with seasonal freelancers, and we have a great customer example of how they're using that, while Teradici...
I don't think it's called Teradici anymore. Jump Desktop, SNS... thank you, AJ: HP Anyware. Yeah. And Facilis FastCache provides secure remote workflows for on-prem media. Michael Kammes, what new innovations are creative teams adopting in remote editing workflows this year that may be different from previous years?
[01:10:37] Speaker F: Look, I love LucidLink. I've loved LucidLink for years. I thought for a professional the pricing was great; there were a few gotchas, but it worked, and it was deployed at literally thousands of clients. But what we've seen is that everyone else has realized that's a pretty good idea. So next year especially, we're going to see a lot of other technologies that are similar to LucidLink, right? Whether it's what Hedge is coming out with, whether it's what Media Anywhere unveiled, there are a bunch of other companies doing things we can't legally talk about right now, but that are going to do LucidLink-type things in this coming year. What that also has meant is that there had to be innovation in how to use cloud storage.
[01:11:22] Speaker E: Right?
[01:11:22] Speaker F: Cloud storage usually came in two or three different flavors. Your Glacier-style cold storage and your object storage, which was good for light stuff; then there was block storage, which was expensive but great for video. We're now seeing companies saying, look, we can take object storage, which is usually difficult to work with, put some management on top of it, and use that lower-cost storage for cloud editing. And that's being used not only by companies like LucidLink, but also companies like Create, which was formerly BeBop, and what EditShare is doing with their FLEX product, which puts their file system on top of cloud storage. So we're seeing more and more ways to make the cloud less expensive and thus more usable by media teams.
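A rough sketch of what "management on top of object storage" can mean in practice: a read-through cache that pays the object-store latency once and serves subsequent reads from local disk. The fetch function here is a stand-in for a real S3-style client (for example, ranged GETs via boto3), not any vendor's product.

```python
# Minimal sketch of a read-through cache over low-cost object storage:
# cold reads hit the object store once; warm reads come from local disk.

from pathlib import Path

CACHE_DIR = Path("./edit_cache")
CACHE_DIR.mkdir(exist_ok=True)

def fetch_from_object_store(key: str, start: int, length: int) -> bytes:
    # Stand-in: a real implementation would issue a ranged GET
    # against S3-style storage (e.g. via boto3).
    return b"\x00" * length

def read_media(key: str, start: int, length: int) -> bytes:
    cached = CACHE_DIR / f"{key.replace('/', '_')}.{start}.{length}"
    if not cached.exists():     # cold read: pull from object storage once
        cached.write_bytes(fetch_from_object_store(key, start, length))
    return cached.read_bytes()  # warm reads are served from local disk

chunk = read_media("show/ep01/a001_c002.mxf", start=0, length=1 << 20)
print(len(chunk), "bytes served")
```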
[01:12:13] Speaker B: Very cool. Mike Cavanaugh, do you want to add anything there? I know we have a great corporate client down in Irvine with a great use case for how they're able to flex their team up and down. Do you want to share that, or any other insight on how you've seen customers using remote technology for post?
[01:12:32] Speaker E: Well, I think the biggest part of it is really a consumption-based model, where in the cloud you're paying for what you use. Although a lot of the manufacturers are trying to rope clients into longer-term contracts, say a one-year or three-year contract, which does lower your usage costs, but doesn't give you the flexibility of quickly ramping up and down. I mean, with Avid you can rent Media Composer in the Avid cloud for a week, run it up, run it down, expand. So there's a lot going on. Adobe driving workflows through LucidLink has just been compelling, and it's transformed people, where they no longer have to have on-prem storage. But there's a cost to that. If you do the math and say, I'm looking at getting an SNS, Facilis, Quantum, Avid, etc. system on prem and setting up a remote workflow, your costs are less than the comparable cloud setup. LucidLink could be $80 per month per terabyte, and that adds up very, very quickly, and you're paying it for the rest of eternity, as compared to the traditional model of buying a fixed amount and maybe scaling from there. From an economic perspective, they're just different models: the cloud gives you incredible levels of flexibility, but as you scale up it also becomes very expensive, and the CFOs start wondering why we're paying all this money in this one field, while the creatives say, well, we're able to do so much more and we're much more flexible. That's really the trade-off from an economic standpoint.
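The math Mike is gesturing at is easy to run yourself. Here's a back-of-envelope sketch using the $80/TB/month figure from the conversation; the on-prem numbers are illustrative assumptions, not vendor quotes.

```python
# Back-of-envelope cloud vs. on-prem storage comparison.
# Cloud rate is the $80/TB/month figure from the discussion;
# the on-prem figures below are assumed for illustration only.

TB = 100                      # working-set size
cloud_per_tb_month = 80       # e.g. a LucidLink-style service
onprem_capex = 60_000         # assumed on-prem SAN purchase (SNS/Facilis/Quantum/Avid class)
onprem_support_yr = 6_000     # assumed annual support contract

for years in (1, 3, 5):
    cloud = cloud_per_tb_month * TB * 12 * years
    onprem = onprem_capex + onprem_support_yr * years
    print(f"{years}yr  cloud ${cloud:,}  on-prem ${onprem:,}")

# At a fixed capacity the crossover arrives quickly;
# what the cloud buys you is elasticity, not a lower bill.
```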
[01:14:16] Speaker F: Hey Mike, I think there's one slight difference there. And I agree with you on the corporate market, where production work is being done for internal comms or marketing 365 days a year. But when we get to project-based work, like television, for example, or even video episodes on YouTube or a VOD platform, you don't want the liability of that technology. What do you do with it after the show is over, or it's not picked up for another season? That's where cloud editing becomes the really awesome use case, because now you can say, look, I need it for three months.
Many of the cloud editing providers have said, look, you get charged one price and you get all of this underneath it, up to X amount of hours, and if you need more, okay, we do overages. But it doesn't become nickel-and-diming for every hour. And that is especially attractive for start-and-stop productions, where you don't want the technical liability, you don't want the support liability, you don't want to own it after the production is over. I can spin it up in the cloud and be ready to go in a day.
[01:15:21] Speaker E: I totally agree. And really, when you look at it from an economic standpoint, that's basically called reduction of barriers to entry. You don't need $50,000 to start; you can literally start your production, ramp it up if the pilot gets greenlit, continue, do your 12 episodes, ramp it down, and you're done. Just like the old world of rental: poof, it's gone, except you're not moving all this equipment into a facility or someone's home. So the flexibility there, and the barrier to entry, is just radically lower for people to create content.
[01:15:58] Speaker B: Jeff, anything you want to add there as well?
[01:16:00] Speaker F: Sure.
[01:16:01] Speaker A: I mean, that's a rental model we are trying to get our heads around, because it really is all about support at that point. One of the things that was always a big problem at the end of a show is that everybody finishes, they're all gig workers, they walk off, and you're left with a box of stuff. There are LTOs and there are disks and all sorts of wonderful stuff, and no one's going to take your calls because they don't care about it. In the cloud, you can hand over custody of that data to someone else, or they still have the originals, because they've been uploaded into the cloud provider's environment.
So it makes it a lot easier to close these things out. Because, you know, I used to do those walkthroughs in the vaults back when I was in facilities, and people would say, well, what are you going to do with this? I don't know, let's see who we can call. We don't have to worry about that anymore.
[01:16:59] Speaker B: All right, well, I think we're going to go into our product lightning round.
Gonna put my TD hat on. So let's get started with the best or most exciting products that you have all seen in the post production space this year. Michael, let's go ahead and start with you.
[01:17:25] Speaker F: I've got two. I can't decide between them.
When you're working in the cloud, review and approve, especially live review and approve, can be really expensive. But I've kind of fallen in love with Louper, L-O-U-P-E-R. It's a live review-and-approve platform that is like Zoom for creatives, and it's low cost and it's fantastic. And as I mentioned earlier, Jumper for Adobe Premiere is fantastic as well.
[01:17:57] Speaker B: Very cool. Jeff, how about yourself?
[01:18:00] Speaker A: I'm going to flip from where I was in broadcast to the ultra high end: Barco's light-steering laser projector, HDR for color grading rooms. They introduced it at CinemaCon this last spring to the wider audience; a lot of people had already seen it in different previews around Hollywood. It is awesome. It will not only change the way people do color grading for theatrical, it's going to change the entire theatrical experience. So we have something that's going to completely modify exhibition and allow a whole different realm of storytelling, because HDR in a theatrical environment is extraordinarily expensive and difficult. This brings down the cost of entry, even though it's not a cheap projector. It's going to be an awesome tech change.
[01:18:55] Speaker D: I have one, Matt, but it's Steve.
[01:18:58] Speaker B: Coming in the post conversation. Come on, Steve.
[01:19:02] Speaker F: Water's growing over here.
[01:19:03] Speaker D: It's Steve, I'm gonna jump in here. It's actually SMPTE, for the adoption of the ST 2110-41 substandard that deals with fast metadata. Because now we've got an ancillary channel that's going to feed lots of good information into all the post production software, to keep track of how it was shot, what audio was going on and how you were capturing that audio, all that kind of good information that will help the post guys do a better job of editing and producing the final content.
[01:19:40] Speaker B: All right, Mike Cavanaugh.
[01:19:42] Speaker E: I got a couple, but because I write the checks, I get to say it. Number one: Adobe's integration of Premiere with Firefly for AI, I think, is just scratching the surface, and it's going to radically change how editors, specifically in the Premiere world, leverage AI for creating better content.
You know, an oldie but a goodie: Avid finally came out with full round-trip cut and mix integration between Media Composer and Pro Tools, and I think that puts it in there for productions where audio is critical. I still remember, back in 1995, being cussed out by Michael Bay when he was cutting The Rock, because he wanted to go back and forth between AudioVision and Media Composer and the technology just wasn't there, even though we knew who he was.
Lastly, I like what Avid's doing with Huddle; the integration for teams, I think, is really slick. And my last pitch, which we're big supporters of, is StorageDNA's DNAfabric, which is effectively data unification across archive, cloud and local storage, giving you visibility and a dashboard for really understanding where your media is and the costs you have around it.
[01:20:54] Speaker A: That's that metadata portability I was talking about.
[01:20:57] Speaker B: I'm going to need to create a sound effect for StorageDNA as well when we bring Mike Cavanaugh on these calls. So cool. All right.
[01:21:04] Speaker A: One of the things still out there: OpenTimelineIO. OpenTimelineIO has come a long way. That's where you save a project out of Avid and then open it up in Adobe Premiere, or open it up in Resolve. They've worked that piece out, and the OpenTimelineIO group is in place to help facilitate cross-software collaboration.
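For anyone who wants to try it, here's a minimal sketch using the real opentimelineio Python package (pip install opentimelineio): build a tiny cut and round-trip it through the vendor-neutral .otio format. NLE-specific adapters, like AAF for Avid, ship as separate plugins.

```python
# Minimal OpenTimelineIO sketch: build a tiny one-clip timeline,
# write it to the vendor-neutral .otio format, and read it back.

import opentimelineio as otio

timeline = otio.schema.Timeline(name="recap_cut")
track = otio.schema.Track(name="V1")
track.append(
    otio.schema.Clip(
        name="a001_c002",
        source_range=otio.opentime.TimeRange(
            start_time=otio.opentime.RationalTime(0, 24),
            duration=otio.opentime.RationalTime(48, 24),  # 2 seconds at 24 fps
        ),
    )
)
timeline.tracks.append(track)

otio.adapters.write_to_file(timeline, "recap_cut.otio")    # save from one tool...
restored = otio.adapters.read_from_file("recap_cut.otio")  # ...open in another
print(restored.name, [clip.name for clip in restored.tracks[0]])
```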
[01:21:31] Speaker B: I'm going to jump in here too, because we talked about this on the pre-call and it didn't get enough mention: I just found those Elgato Stream Decks to be everywhere at NAB. And this isn't just post. We went to Michael Klein's house this year in one of our videos, you can check that out on YouTube, where he's using it for post; he has these LED hotkeys he set up for his post workflow. But then you walk NAB and you go over to the Vizrt booth, and they're showing the new TriCaster with all these macros set up to make their live production easier. It just seems like a very useful small item, and I'm going to say that everyone needs one at their desk. People will come up with different ways to hotkey things, to turn this on and cut this and cut that, much quicker than trying to do it from a keyboard.
[01:22:22] Speaker F: Matt, I'll elaborate. I've been using a Loupedeck CT for years, and the software was buggy; now that Logitech owns them, it's even more buggy. So I bit the bullet and bought a Stream Deck XL, and I almost don't use the keyboard. I have a hand on the mouse and a hand on my Stream Deck, and it's fantastic.
[01:22:41] Speaker F: I wish I had changed over several years ago.
[01:22:43] Speaker G: Yeah. If you like Stream Decks, you'll love the QuickLink control surface, because for the new switchers they built in a giant control surface that's just a whole bunch of Stream Decks all glued together.
It's great.
And yeah, I've been putting Stream Decks in every project I have, and everybody's like, oh, I can just fire all my macros and control five different things that normally have no business talking to each other, all from one button. Great.
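For the tinkerers: the "one button fires five things" pattern is easy to prototype with the open-source python-elgato-streamdeck library (pip install streamdeck). A minimal sketch; the actions are hypothetical stand-ins for your own switcher, graphics and recorder integrations.

```python
# Minimal sketch: one Stream Deck key fires several unrelated actions,
# using the open-source python-elgato-streamdeck library.
# The action functions are hypothetical stand-ins.

from StreamDeck.DeviceManager import DeviceManager

def cut_to_cam_2():      print("switcher: cut to camera 2")      # stand-in action
def fire_lower_third():  print("graphics: fire lower third")     # stand-in action
def start_recording():   print("recorder: start ISO record")     # stand-in action

MACROS = {0: [cut_to_cam_2, fire_lower_third, start_recording]}  # key 0 -> three actions

def on_key(deck, key, pressed):
    if pressed:
        for action in MACROS.get(key, []):
            action()

deck = DeviceManager().enumerate()[0]  # first attached Stream Deck
deck.open()
deck.set_key_callback(on_key)
# A real script would keep running here (e.g. join a thread or loop)
# so the callback stays alive while you press keys.
```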
[01:23:12] Speaker A: And hopefully, hopefully everybody's got a Like and subscribe, you know, automation in there. If you don't, this is the time to do it on the YouTube stream. Like and subscribe.
[01:23:22] Speaker D: Keycode media.
[01:23:23] Speaker G: Yeah.
[01:23:24] Speaker B: And I see Barry just commented. Barry Ghosh is on here from AJA. He said the HELO Plus supports the Stream Deck as well. Little plug from Barry.
All right, great. So now I'm just looking at a lot of comments: great insight, great insight. It seems there were some questions about uncompressed workflows that I think we'll have to follow up on. But thank you all. I think we're going to wrap it up from here. Before we do that, we want to thank everyone who stuck it out this year watching Broadcast to Post, either live on YouTube, on Spotify, on Apple Podcasts, or on our YouTube channel later on. We had a great year of interviews. We interviewed Fred Vogler, the front-of-house engineer at the Hollywood Bowl, talking about live sound. We had Denver7 talking about their 2110 newsroom build-out from the ground up. We had Karl Soule giving a preview of Firefly in Premiere Pro 25 before it even came out.
There are 12 fresh episodes of Broadcast to Post just like this that we produce every year, and we're going to be doing 12 more next year. So go to our YouTube channel, go to the Broadcast to Post playlist, or find us on Apple or Spotify. A lot of great work went into this. I want to shout out our moderators: Steve, Jeff, you guys have been doing a great job. Andy, our editor and content manager in the background, and Chase and JJ, coming in to TD and throw the show, coming in super early every morning just to make sure all the assets are lined up and ready to go. I think I do have an applause sound effect, and we'll play it there. All right, so that's the plug for the podcast, and I hope you've enjoyed it throughout the year. If you need to get the conversation started on your next project, or you're trying to figure out what products you need to budget for your broadcast, post production or even AV infrastructure, please contact
[email protected]. Our rapid response team will be very quick to respond and point you in exactly the right direction for buying the right things at the right price that makes sense for your team. So I want to thank everyone. Thank you, everyone, for joining today. We had a big group of people: Michael Kammes, Mike Cavanaugh as well, and everyone I haven't mentioned in the control room. Thank you so much.
[01:25:49] Speaker A: Thanks for watching Broadcast to Post. Please make sure to subscribe to the podcast to receive future episodes. Follow keycodemedia on LinkedIn, Twitter, Facebook or Instagram to receive news on additional AV, broadcast and post production technology content. See you next time, folks.