Superman

Discover why the post-credit scene used to be IN the movie, how an “embedded sound person” helped deliver more than just sound, and why audiences needed a certain pace to discover a “new” Superman.


Today on Art of the Cut, we speak with the editors and the “embedded Avid sound mixer” on Superman. William Hoy, ACE, and Craig Alpert, ACE, edited the film. Ian Chase worked as a liaison mixer and sound editor between the picture cutting team and the sound design and mixing team.

William’s been on Art of the Cut before for War for the Planet of the Apes, The Batman, and The Call of the Wild. His other credits include Dawn of the Planet of the Apes, the just-released Fantastic Four movie, I, Robot, 300, and Dances with Wolves.

Craig has also been on Art of the Cut before for Borat Subsequent Moviefilm and Deadpool 2. He was nominated for an ACE Eddie for both of those films. He also edited Popstar: Never Stop Never Stopping, and Pineapple Express.

Ian Chase has been on the sound teams of Everything Everywhere All at Once, The Fantastic Four, Avatar: Fire and Ash, and Guardians of the Galaxy Vol. 3. He was nominated for a Golden Reel Award for She-Hulk.

Gentlemen, thank you so much for being on Art of the Cut. I went to go see Superman the other night, and the audience was completely on board with the movie and loved it. When you’re dealing with a movie that has so much VFX, talk to me about the evolution of a particular scene and how you start cutting it when all you have are maybe plates or green screen or something, and how that evolves.

HOY: Let’s start with the River Pi. It’s a very colorful kind of psychedelic scene ultimately, but what we started with was just a shot list from [director James Gunn] and very rudimentary storyboards. The idea is to follow that to start, because a lot of the action that’s shot on blue screen, you just wouldn’t know how to put together.

We’re revolving around Superman, and then where does that go? Because the slates don’t relate to anything and it doesn’t tell you what shot follows what, so you actually have to decipher what shot belongs to each storyboard and follow the storyboard, then set the timing for yourself.

I actually looked at the first cut of the River Pi without any visual effects in it. And the difference between that and the very finished product was just a few seconds, which is amazing to me because you’re usually leaving shots longer so that if something looks great in the visual effects, you’re gonna keep that piece.

You can’t really make it longer once you’ve actually cut it, because the visual effects people already budgeted the shot.

When Superman was in the river, it looked like a bunch of styrofoam balls. So he is bobbing up and down in these styrofoam balls and everybody’s on wires.

So you really have to use your imagination to put that together. But the storyboards do help. If it wasn’t for those storyboards, then it’d be a little bit more difficult to put together, especially in some of the wider shots.

When I said that the first cut of the River Pi was close to the final, some shots ended up longer, some shots ended up shorter, obviously.

So you see some shots and say, “Wow! We should stay on that shot longer” for sure. The visual effects people were really good on this picture. They understood that, so they scanned it longer, meaning that they gave us longer shots than what we actually gave as a template.

If I gave them three feet, they would give me a lot more on each end if it was a shot they envisioned was gonna be a cool shot. Then once I saw a temp, I could say, “Let’s extend this shot.” But also, if another shot was kind of self-explanatory, I could make it shorter.

Even before they start compositing the shot and doing the real thing they give us post-vis, which is one step past previs where they actually take Superman’s blue screen, create all the elements around him, and then at that point you still have the opportunity to extend or reduce the length of the shot without really causing too much damage to visual effects.

Ian Chase

Ian, with a scene like that at such an early stage, are you putting sound effects in to help give it some life?

CHASE: One of the great things about working with James is he really wants the sound to be the best that it can. This is where having a relationship with [Supervising Sound Editor] Katy Wood and [supervising sound designer] Dave Acord really comes into play.

Dave has worked on so many of James’s films that he had a great understanding of where we should go before we even started on the movie.

Dave had read through the script and identified certain elements that he wanted to do that we shouldn’t even touch. In this case, Dave did all the river by himself and sent three stereo tracks and they sounded amazing. They went to Matt, the assistant, and then to Bill for mixing.

One of the cool things about having somebody in the picture cut rooms on Media Composer is that I realized Dave’s in a 7.1 room.

We had put all three tracks originally in the front, and I realized we could put one stereo track in the left and right, another stereo track in the sides, and the third stereo track in the rears. Suddenly that opens up the whole river sequence ‘cause there’re waves going by.

So you hear the wave go over, and then there’s lasers, and you can start to send things to the sub a little bit more carefully.

So having somebody there in Media Composer who can interpret Dave’s sound design - even if he had just left it in the front, it would’ve been great - but to kind of move it around…
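The channel placement Ian describes - three stereo stems spread across the front, side, and rear pairs of a 7.1 bed - can be sketched roughly like this. It is a hypothetical illustration only; the function name and the assumed SMPTE channel order are not the production setup.

```python
import numpy as np

# Hypothetical sketch: spreading three stereo stems across a 7.1 bed.
# Assumed SMPTE channel order: L, R, C, LFE, Lss, Rss, Lrs, Rrs.
def spread_stems_to_7_1(front, sides, rears):
    """Each stem is a (samples, 2) stereo array; returns a (samples, 8) bed."""
    n = front.shape[0]
    bed = np.zeros((n, 8))
    bed[:, 0:2] = front       # stem 1 -> front left/right
    bed[:, 4:6] = sides[:n]   # stem 2 -> side surrounds
    bed[:, 6:8] = rears[:n]   # stem 3 -> rear surrounds
    return bed                # center and LFE left empty here
```

Leaving the center and LFE channels empty in this sketch mirrors the idea of sending things to the sub carefully rather than by default.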

So it’s a collaboration. Dave does all the big stuff. Then if there’s something simple like a door close or some ambiences, then I would cut that in.

Originally there were drawings James had done of Krypto and an arrow where he’d say, “Krypto’s gonna go this way.” So we would put a whoosh in. I would always talk to the visual effects editors.

They had a lot of previs so they could show me, “Krypto’s gonna have a collar. Krypto’s gonna have a cape,” so we can add some of those finer details even at the storyboard phase. Then as the shots get finalized, we put better sound in.

CHASE: Stephen, one thing to mention is that Bill and I were working in six track sound format.

HOY: We had six speakers in our room, and normally most shows work in stereo. A lot of shows I’ve done, we’ve worked in stereo. There are only a few where we’ve gone six track. So there’s just a bit more for all of us to balance sound-wise as we’re editing.

William, did you like working in 5.1?

HOY: That’s actually how I normally work. I tweak my room according to what the theaters are at the studio I’m at so that I can play my Avid tracks out for the first few audience screenings. We went to temp mix after maybe five screenings, six screenings, maybe more.

Ian’s being a little modest here. He actually helped a lot in the beginning, because as picture editors we can find sound effects to put in, but that takes a lot of time. We’re picture editors, so we need to turn our attention to the picture.

So, Ian helped a lot in that. He even gave us some Foley. Ian gave us a good template to go into the later mix too, and even into the final mix because there’s a lot of transitional sounds that he put in there.

It was great to have Ian there because it allowed Craig and me to concentrate on the picture more because there was a lot to do picture wise on a show like this.

Craig Alpert, ACE

ALPERT: Actually it felt like it was essential to work in this sound format. I normally work in two track stereo, then when we screen we just flip on the Dolby encoder in the theater and let it create a false “Pro Logic.”

But in Superman there’s just very specific sound design, and elements that I think were really enhanced by this. Working in six track and going out to screenings helped us a lot.

Craig, in some of those VFX sequences, did having the sound effects sometimes just help you feel like you could stay in the shot longer, or did you feel like it gave you a better sense of the rhythm of the scene to have a more advanced soundtrack at an early stage?

ALPERT: Oh, absolutely. Even once the additional sound design came in towards the end, we ended up extending shots and holding on things, because as you’re cutting, you don’t really know exactly how everything’s gonna sound.

So at least while Ian was going through adding the sounds, absolutely it did affect the length of the shot.

HOY: When we start out in editorial we pack it all up with music, and pack it all up with sound effects, because we don’t have the picture yet. So once the visual effects get in there, we can let the picture speak for itself. Then we begin to pull things out: take the music down, take some of these sound effects down, or play the dialogue more.

So once we start getting visual effects, then we begin to understand, “Oh, this is the story here,” so it helps a lot that way.

But in the very beginning when it’s all blue screen, we let the music tell us what’s going on, ‘cause I don’t know what’s going on. We have Superman flipping around on blue screen. What is going on here?

Avid screenshot of full timeline

So once the story reveals itself - when the visual effects start coming in - then we start to pull back. In the final we also pull back on some of the music and at least vary the sound effects to let everything have its moment.

I loved the storyline with the parents, and some of the emotional beats. Can you talk to me about editing Superman, going back home and the time that he spent with his parents?

HOY: When Clark is speaking to his dad outside, that scene is centered around David Corenswet’s closeup. He is so real and so genuine saying, “I’m not the person I thought I was.” That’s the most important moment to get to from dad coming out trying to make small talk.

You kind of see the relationship. It’s not the closest for some reason; you’re not sure why. That scene was a bit of an evolution too, because one of the actors had to be tweaked a little bit - I’m not talking about David Corenswet - but ultimately it’s amazing. He comes off great, and it’s built on the groundedness of David’s performance. He just hit that moment.

Was that primarily built from a single take?

HOY: It was a single take on David Corenswet. As actors they give you different takes. They give you variations, but when they’re on in one take, that’s the take.

You try to intercut it with another take, and sometimes it’s not gonna work. I’m not talking about the over-shoulders, I’m just talking about his closeup.

William Hoy, ACE

Do you ever work where you choose the heart of a moment and you cut that in and work in either direction from that, or are you always working linearly?

HOY: Personally, that’s what I do: work inside out. When I watch the dailies of a scene, I know that’s the moment I have to get to.

Regardless of continuity, regardless of anything else, I have to get to that moment, because that’s the moment that’s the story of this particular scene. I have to get that moment.

So it is working inside out. Sometimes you are looking at a scene and maybe it’s not so self-explanatory, so I start in the beginning, but you discover it later. But a lot of times you see it in dailies. Craig, would you agree with that?

ALPERT: Oh yeah, absolutely. Sometimes it’s a jigsaw puzzle to get to it - to try to work your way to that moment.

Did you have a scene like that that you can talk about? A specific example?

ALPERT: The apartment scene is a big scene between Lois and Clark. It sort of has everything in it. The success of the comedic moments in that scene hinged entirely on timing, rhythm, and their reactions. That scene was probably 12 minutes.

Early on that seemed to be one of James’s favorite scenes. We all sort of spent a lot of time finessing it.

Post crew

HOY: I know that there was talk of cutting it down, and Craig tried that every now and then, but the scene had a certain flow to it. Once you took a certain section out and tried to cut it down, it lost this emotional buildup. I saw a shorter version of it and said, “That doesn’t work. We gotta put it back in.”

I know that a lot of people get concerned about a dialogue scene in a movie like this that goes for 11, 12 minutes. Is that too long?

Are people going to like that? But people have strongly responded to it. They love that scene, and the critics find so much character in it - what the characters are all about. I’m glad we didn’t lose any more lines. James didn’t wanna lose any more lines.

ALPERT: It did sort of rubberband back and forth and it ultimately ended up basically how it should have been. The two of them are so good in it.

During screenings I would watch it just to see how I felt and think, “That is a good scene. I don’t want to cut it. I just don’t want to cut it down.”

Ian, with a scene like that, what kind of character are you trying to add or what kind of help are you trying to do with sound?

CHASE: James really wants to elevate the stories and the characters, and I think that that’s where a scene like this comes from.

What’s great about these quieter moments is you can really enhance a lot of the character details through great Foley and great dialogue editing, and kind of let the music take a back seat for a second and play up some of the ambiences, do some world building.

Katy Wood had done a pass on that scene because it was one of James’ favorite sequences.

So she had gone through and really cleaned up some of the ticks and pops and clicks, and that really opened it up to cut a lot of really beautiful Foley. So as Superman’s sort of uncomfortable and shifting, we hear cloth and the movement of the couch.

Obviously they’re interacting with the recorder, so we did a lot of recording with that recorder. Adding buttons and clicks, adding little footsteps as he’s moving around the room so you get a sense of his pacing and really just trying to support and elevate the performances on screen because - like Bill said - they’re such great actors and the editing is so good.

The directing is so good that really sometimes less is more. With that sequence, the name of the game is: what’s the least amount of sound that we need to tell the story? As long as we’re focused on that and elevating the story we’re doing a good job. That was the goal.

Is there any sonic world outside of the apartment?

CHASE: It’s very slight. James brought everything down so we could stay with the intimacy of the scene.

HOY: I remember asking James, “What floor are they on?” He says, “Third floor, I think.” What do we hear on third floor?

Actually we’re not hearing a whole lot, because even a car-by or a bus-by interrupts the story, so it’s really played down there. I think one of the sounds that was played up was the recorder sound. James wanted it really loud, so when he presses the button, you are very aware that he’s being recorded.

It doesn’t go beep. It goes BEEP! I asked, “Is that too loud?” James said, “No, I want it that loud.” So I said, “Okay, fine.”

What’s in the mix does not necessarily have to be what is “true.” Like maybe it will be quieter on the 14th floor than third, but how important is the emotion of that scene, and do you really need that extra audio support?

HOY: That’s right. I think if you watch it, you don’t miss it.

I wanted to ask because sometimes there’s all kinds of great stuff that you don’t consciously hear in a scene like that.

CHASE: It’s all very subtle. How clean the dialogue is really sucks you into their conversation and you’re focusing on them the whole time.

HOY: When people think about sound, if it’s big and splashy, that’s something, but crisp, clear dialogue, that’s a craft in itself. I think that scene speaks for itself that way.

What about some of the scenes at the Daily Planet - the pacing of them. Some of those scenes are a little bit more comedic. They’ve got some nice snappy banter. You’ve got a lot of stuff going on in some of those scenes.

HOY: When James shot the Daily Planet he was a little worried that he didn’t have enough wide shots to show what the interior of the Daily Planet looked like, so he went back a second day and got that.

I actually had my assistants break all the lines down so that I could see where all the lines fell on the characters because they were all roving cameras.

They weren’t sitting there in an over-shoulder. They continually move, so you have to find their position. And as actors, they were free to move a little farther than some of their marks, so you have to keep the continuity in some way.

So I had my assistant break the scenes down into the lines so that I could go through and know Lois Lane says this line in all these shots. That’s how I began to put it together, so I knew whether I wanted to be in a wide shot on this line, or a closer shot.

That’s kind of the basic way that I put it together, because just looking at what was coming in dailies - we’re all editors, sometimes you get overwhelmed! You just say, “What am I gonna do here? How long is it gonna take for me to put the scene together?”

There are different ways to put scenes together. Like the scene between Pa and Clark, that’s a much simpler thing, so it doesn’t need to be broken down that way. But the Daily Planet scene is a massive scene.

The next day he shot a pickup of Jimmy Olsen saying something, so it had to be incorporated into all this breakdown of the dialogue.

There’s a lot of funny stuff in there, so you wanna make sure you get those lines and you get the reactions - those little things, the way Lois is looking at Clark and Jimmy’s interactions with the girls, for example. So that was the initial approach.

Then Craig addressed some of the notes that James had. There’s a lot of complicated blocking in that scene too: Walking, then moving around in the chairs, then you have Steve Lombard. There’re a lot of comedic moments with him that have to be nuanced. Maybe when you watch it, it doesn’t appear like it’s a complicated scene.

I thought it looked complicated, which is why I wanted to talk about it!

ALPERT: Bill and I would pass scenes back and forth. Bill would take a pass. I would take a pass. It was a really interesting, amazing way to work - to not specifically divide the film up. For both of us to stick our hands in everything.

It sounds like Bill started that scene. How do you revise that if you weren’t the one that did all that initial work?

ALPERT: I use this tool in the Avid called the Script Tool. It’s not used a lot, but it helped a great deal for me on this movie to have everything laid out in script format. I would go through and watch all the dailies then mark my selects in the Script Tool.

But since Bill had cut the original cut, everything was based on that cut. I just did notes, tried alts, switched performances. Just sort of honing the scene down to get it exactly where it ultimately landed with James.

Can you think of any specific notes or things that had to be addressed?

HOY: Specifically with the Daily Planet scenes we were always concerned about the pacing. When Clark entered, we played around with how long the conversation was on the phone with Ma and Pa. Then with Steve Lombard as he’s walking in, we focused on that section.

When our composer came on that was a scene that we never had any temp music in, but our composer wrote these Daily Planet cues.

Once they got dropped in it addressed any concern we had about pacing. We had done a lot of picture work obviously before that to get the scene where it was, but I felt like the score was just the icing on the cake.

It’s interesting that you had not spotted those tracks for the composer…

ALPERT: For those two specific scenes, we were not planning on scoring. As far as the other films I’ve worked on, it’s not often that the composer writes music in scenes where we haven’t spotted it, but we were all open for it and it worked.

HOY: It worked really well.

Ian, what kind of sound work are you doing where you’re trying to use the Media Composer tools instead of something like Pro Tools, which has more capability?

CHASE: I love Media Composer. There’re some really great features inside Media Composer that don’t exist in Pro Tools, and I’ve begged Pro Tools to put them in.

One of the coolest things is you can take an AudioSuite plugin setting and save it as a preset. You can also put it in folders and bins so you can drag and drop it onto clips.

You can’t do that in Pro Tools. In Pro Tools you have to cycle through these presets and go into the menu, so it’s very clunky. I think it’s a lot faster, and you can share presets much more quickly in Media Composer.

A good example would be the Hammer of Boravia. He had a voice filter, so we saved a setting for that. Each of the rooms - if we went to the Hall of Justice or the Daily Planet and we had a piece of ADR that we wanted to put reverb on - had its own preset. So you can just drag and drop. You can’t do that in Pro Tools.
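The drag-and-drop idea Ian describes amounts to treating plugin settings as named, reusable data. Here is a minimal sketch of that concept; the preset names and parameter values are invented for illustration, not the film’s actual settings.

```python
# Hypothetical sketch: presets as named settings bundles that can be
# dropped onto any clip, instead of re-dialing a plugin each time.
PRESETS = {
    "hammer_of_boravia_voice": {"pitch_shift": -4, "distortion": 0.6},
    "hall_of_justice_reverb": {"decay_s": 2.4, "wet": 0.3},
    "daily_planet_reverb": {"decay_s": 0.9, "wet": 0.2},
}

def apply_preset(clip, name):
    settings = dict(PRESETS[name])  # copy, so the shared preset stays untouched
    return {**clip, "fx": clip.get("fx", []) + [settings]}

adr_line = {"name": "adr_014"}
treated = apply_preset(adr_line, "daily_planet_reverb")
```

Because the preset is copied when applied, tweaking one clip never alters the shared bundle that everyone else drags from.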

One of the other features that’s great is the ability to stack AudioSuite plugins. In Pro Tools when you AudioSuite something, it makes a new file and it’s destructive, so it renders that.

In Media Composer, you can put an EQ on, then go to a second AudioSuite and put a compressor on, then put a DeEsser on, and even though it’s rendering, it’s all non-destructive in the sense that you can remove those at any time and change or conform it.

In Pro Tools, you can’t go back to the original. So what a lot of people do is mute the original and store it underneath. That becomes very clunky, because you’re carrying all these muted clips and you’re always kind of panicked, like, “I might have to go back to the original.”

But in Media Composer the original is always there and you can change it out and you can drag and drop. So those are really great, especially when you are working very fast towards a goal. It makes it very collaborative as well.
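The non-destructive stacking Ian describes boils down to one principle: the original clip is never overwritten, and every render replays whatever stages remain. A toy sketch of that idea, where the stage functions are crude stand-ins rather than real EQ or compressor DSP:

```python
# Hypothetical sketch: non-destructive stacking as a replayable chain.
def render(original, stages):
    out = original
    for stage in stages:
        out = stage(out)
    return out  # the original list is never modified

clip = [0.5, -0.8, 0.3]
eq = lambda s: [x * 1.1 for x in s]                   # stand-in for an EQ pass
comp = lambda s: [max(-0.6, min(0.6, x)) for x in s]  # stand-in compressor

stages = [eq, comp]
mixed = render(clip, stages)

stages.remove(comp)           # remove a stage at any time...
mixed = render(clip, stages)  # ...and re-render from the untouched original
```

The contrast with a destructive workflow is that there is no muted copy to carry around: the source is always intact, and the chain is just a list you can reorder or prune.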

Were you doing anything to the normal voices to make them cleaner or clearer, or were you only doing things like the Hammer of Boravia that needed obvious effects?

CHASE: We worked with Katy Wood in the beginning, who was the dialogue supervisor, and she gave us a roadmap of what to avoid, because she didn’t want us to tie her hands and limit her creative ability.

She also gave us a lot of great advice on settings and presets. So we had all the stock plugins in Media Composer, but we also had iZotope RX 10, which is a great plugin suite for EQ, compression, de-essing, de-clicking, and de-noising.

One of my favorite features in RX 10 is the DeEsser. If dialogue got a little sharp, instead of EQing it and making the whole line muddy, you can put a DeEsser on and tune it to where the sharpness is, so that you keep all the presence but get rid of some of the spikiness and harshness. So all the clips - or most of them - probably had some sort of EQ, some sort of compression, maybe a DeEsser.
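Conceptually, a de-esser is a compressor confined to the sibilant band: it turns that band down only when it gets hot, leaving the rest of the line alone. Here is a rough single-pass sketch of the idea; real de-essers work frame by frame, and the band, threshold, and ratio values below are arbitrary assumptions, not any product’s settings.

```python
import numpy as np

# Hypothetical sketch: band-limited gain reduction, the core of de-essing.
def deess(signal, sr, band=(5000.0, 9000.0), threshold=0.25, ratio=4.0):
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Estimate the loudest component in the sibilant band (as peak amplitude).
    band_level = 2.0 * np.abs(spec[mask]).max() / len(signal)
    if band_level > threshold:
        over = band_level / threshold
        spec[mask] /= over ** (1.0 - 1.0 / ratio)  # turn down only this band
    return np.fft.irfft(spec, n=len(signal))
```

A broadband EQ cut at those frequencies would dull every syllable; here the reduction only engages when the band exceeds the threshold, which is why the presence survives.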

And then for sure we would AudioSuite these reverb tracks either in stereo or in quad, depending on the size of the room to kind of bring the 5.1 element in and really make it feel like you’re listening to a theatrical film, even though it might just be the director’s cut.

Everybody’s excited to watch Superman, even in the early stages, so the fact that you can apply all of these presets and really bring the soundtrack to life…

We have Dave’s elements, Katy’s elements, and it just makes the whole thing exciting. It’s cool that Media Composer can carry all that automation and play just fine.

Superman timeline for Reel 1

I would think that mix helps the editors too, because you don’t have to waste days going out to do a temp mix for a screening.

HOY: Absolutely. A temp mix takes time. Editing-wise, you have to have “pencils down” and sit on the stage for five days, because your picture has to be up to date.

You can’t continue to change it. The good and bad of that is that if you are mixing in your Avid, then you can edit up until the last moment, so you can end up being there very late at night mixing for a screening the next day.

In the early stages I much preferred that because those five days that you spend on the mixing stage, you can actually be working on the picture and trying to perfect certain moments, or at least modify them and get them to a place where you feel good about it.

The visual effects are constantly coming in the day before the screening. We get a dump of effects, so we then go in the Avid and try to tweak it, and I’ll walk into Ian’s room: “Hey, I got these VFX which we need sound effects for, because before there wasn’t anything there.” I liked having Ian down the hall, because otherwise your picture would be five days behind.

CHASE: Bill would come to my room and he had all these sticky notes. I’ve collected all of them. 200 sticky notes! All caps: “IAN, DOG BITE” or “GREEN LANTERN” or “WALL SMASH”

HOY: There was no exclamation mark after the all caps!

ALPERT: I’m gonna have to start using Post-Its.

HOY: We always give the visual effects departments a deadline days before a screening, but we never stick to it, because at the last minute there are always amazing shots that come in.

If we weren’t mixing the sound for our early screenings in the Avid, I don’t know that a lot of those shots would’ve made it into the screening, because we wouldn’t have had the time we needed to cut the sound design and address ’em.

I found that on a film like this, it was almost essential to use the process we were using.

Let’s talk a little bit about the structure of the film and the things that you discover as you see the film in context. You cut a great scene - like the 12 minute scene of the interview in the apartment - and it seems great, but then what happens to it when you put it in the context of the whole film? Then what kind of creative decisions are being made?

HOY: You’re discovering what this world is with this particular Superman. Especially when I watch it with an audience, I wonder: is it too slow in the beginning?

But the audience seems to respond to what’s going on because they’re discovering: Well this Superman bleeds and he’s got robots and Clark and Lois already have a relationship.

The audience is discovering it. As people working on the picture, we already know it all, so we’re not sure if it’s gonna work. But later on, when the action starts happening, we would go back and forth between Superman fighting Ultraman, Jarhanpur, and the Rift.

I think ultimately what we did was streamline the whole thing. We lost some scenes in between because it just didn’t flow. When I first read the script and when we first put the picture together, Superman’s getting beat up a lot. When is he gonna have his moment?

I think it was the third act that we had to address the most. There certainly were some scenes in the pocket universe that were slowing things down in that middle section.

That was kind of a self-contained place in the movie, but everything else played out in a linear fashion until the very end where things were being cut back and forth.

The pocket universe runs about two reels, and even though we do slightly jump outside of the pocket universe, we did early on sort of have a pacing problem there. We really needed people to stay engaged.

Music and visual effects help that, but we did just slowly trim it down and make sure that we were cutting in and out of that pocket universe and Superman’s struggle within it sort of at the right times until we landed where we ultimately did. And that big action scene in the middle of the pocket universe really helped us a lot.

What kind of sound was being done in there?

CHASE: What I learned from Bill and Craig is that when you understand the story - when you’ve sort of diagnosed, almost like a doctor, what you need - everything sort of becomes clear and focused. That happened in the pocket universe.

It was an exciting place to go to. We had never seen anything like this before and there are these really awesome portals that go in and out. I remember talking with Dave early on about these portals. They’re kind of broken. Lex had built them kind of haphazardly.

They should have an element of danger. When we go through them, we should feel the sound move through the theater, as if the audience has moved into a new location. We really wanted it to be really wide. And you can find some really cool recordings.

There’s a recording of the Golden Gate Bridge from a few years ago where the wind was hitting it and all of the slats in the bridge were making these ghostly kind of tonal elements, almost like the bridge was singing. So you can find sounds that are very organic but have an otherworldly quality to them.

You place those sounds in the front and in the back. And then we added all this movement, because sometimes they’re going through on a craft, so you get a lot of sense of movement.

There’s the prison in there. There’s obviously the river. Dave is truly a once-in-a-generation sound designer. Sometimes we’d need something in an hour, so Dave would tell me to do my best and he’d come back in later to work on it.

I would put something in and we’d think, “That sounds pretty good.” Then a day or two later, Dave would send something, and it’s amazing.

I was really excited to work with Dave. It’s just such incredible work so it was cool to kind of be able to try it first, then see what somebody like Dave would be able to do.

Superman’s biological parents have this message that Superman likes to listen to, and it plays a huge part in the plot of the film. How are you cutting those scenes when you don’t have the VFX and SFX that make up so much of the plot?

HOY: Originally, they just recorded his parents’ audio so that we would have tracks to put into the scene itself. Just dialogue as the robots are putting Superman on the healing bed. Later they actually scanned his parents in 3D for the VFX.

What we got to work with was basically a witness-cam of them. We used that for the picture where they are being projected on the Fortress of Solitude. That’s what we used for placement and timing and all that.

Can you explain what you meant by the witness-cam?

HOY: When they 3D-scan any characters in a movie, they have all these cameras that surround them and capture them in three-dimensional space. Those cameras provide the digital information to build Bradley Cooper in 3D space so they can render him.

The witness-cam is just the picture from another camera that just shows his performance. There are all these focus squares and everything behind them and all these cameras behind them, so that’s not ultimately the picture that you’re gonna use.

It’s just a template to place temporarily in position where you want it, and also to give the visual effects people the timing and which performance you’ve chosen.

You use the witness-cam to pick the performance then send it off to them, then they give it back to you as rendered 3D.

That’s a process that you went through with the Planet of the Apes movies. Correct?

HOY: Very similar.

Ian, what’s an interesting thing that people might learn from this method of having an embedded sound person? What is the value of that, and what are you doing day to day?

CHASE: I think the value of the embedded sound person really comes down to three categories in terms of the job description. The first is having good communication between the sound team and editorial. I’m talking with Bill. I’m talking with Craig.

Ultimately, being a support element frees them up to just be thinking about the picture. I help out with any of the audio tasks needed, whether that’s addressing James’ notes, doing some sound editing, or placing the material that Dave and Katy are sending.

The second thing is an interpreter. Sometimes it can be challenging to get an AAF from the post sound team. You don’t know what to mute, you don’t know what to keep, you don’t know what to replace. You don’t know what to pan.

These are all decisions that Bill and Craig can make and are capable of making. Having somebody there who can kind of massage it a little bit or decipher it, that seems to go a long way.

Then the last bit is being a support element to the post sound team, because we’re really an extension of them. I think it’s a great way for the sound team to interact with the post process, because you can get ideas in earlier and build a sense of confidence in the room. For example, we noticed that there was a 10 dB difference between Bill’s room and Craig’s room.

That makes a big impact when you’re trying to make the reels feel consistent. Bill’s room was super great, super consistent, then when Craig moved in, they set it up and it was just a little bit louder. You’re making your decisions about levels based off of that.

In one of the assistants’ rooms the left speaker wasn’t even on. He came to my room to tell me that there was a place where the dialogue drops out.

But it’s because we had it panned. He thought we were missing a character’s line. So sometimes it’s just building a level of confidence that the rooms are good. The theater’s good. It’s helping out with turnovers. It’s kind of this overall support element that really bears fruit in a lot of different ways.

This is such a peek behind the curtain that most sound people don’t get, because we come in at the end of the process when the film is nearly edited.

They’ve gone through all of the storytelling and we’re just there to enhance it, so to be able to sit down and watch somebody tell a story…

A good example of this is there’s a scene where Superman watches somebody get killed. I had done all this sound design building to this moment, then the gun goes off and there’s nothing. Bill said, “This is some really great sound design, but the storytelling isn’t quite there.

The story isn’t that this person gets killed. It’s that Superman has to live with it afterwards. For the first time, Superman’s feeling weak. He’s realized he can’t save somebody. He’s feeling depressed about this, and we’ve never really seen that in a Superman movie, so your sound design is lopsided.

It actually needs to go to the other side. You’re selling this moment before we even have it. So you need to reverse your order.”

I never would have thought about that! So I really feel like the mindset of sound editing has changed for me by working with picture editors because they just view everything from a point of story, and why is this scene in the movie? So it’s really a great education.

Craig, tell us about a scene that you worked on that was rewarding.

ALPERT: For me, it was probably the final scene in the movie where we introduce Supergirl. I found it really rewarding to introduce the next character in the next DC movie.

And James and I played around with Robot Four - whether he was gonna introduce himself as Gary. There were just a lot of small things in there that we experimented with that always seemed to get a pretty great reaction from the audience.

So I think that that was probably one of my favorite scenes. You have to set the tone of what her character’s gonna be in her film.

The film already sort of ended in Metropolis, so it was always sort of the challenge of having another scene tagged on to the end of the movie and bringing Superman back to the Fortress of Solitude.

It’s always the question: did we overstay our welcome? Are we introducing her correctly? So those were sort of the discussions that we had.

HOY: One of the ideas was that there were too many endings, and so we took out the scene between Mr. Terrific and Superman looking at the crack in the wall, which now appears at the very, very end of the movie after the credits.

And as Krypto was evolving, that’s when James said, “We’re gonna put Krypto in here.” That just made for a wonderful moment because it changed the whole complexion of that area.

We did have too many endings. Then once we get to Supergirl - which is a big payoff - we certainly want to keep that. So I think the idea of losing Superman and Mr. Terrific and putting them at the end of the movie was the right thing to do.

How late did that decision get made to pull that into the post credits?

HOY: Probably about six weeks before we had to lock picture. Maybe a month. We had to give them time to render Krypto out, so that’s why I say six weeks.

ALPERT: Yeah, it was a bit of a rush job to get Krypto in there, but it did ultimately pay off.

What about you, William? Did you have something that you can recall that you were either challenged by or was very rewarding to edit?

HOY: Finding the pace of the third act. It wasn’t one scene. I mean, the scenes existed on their own from scene to scene. They were pretty well constructed, but it just felt like we were not focused on what was important because we were trying to tell too many stories at the same time.

We’re in Jarhanpur, then the palace, then we have the Justice Gang show up and Superman is still fighting with Ultraman, then the Rift is headed toward the LexCorp building. That scene actually was placed much later in the movie.

Basically, at the very last minute we took that scene and placed it where it is now, when the destruction was happening in Metropolis. So that was a last-minute move. James was worried because he said, “I don’t like doing things at the last moment.

I have to look at that again.” So he looked at it again the next day and said, “That’s where it should be. That’s a no brainer. It should be there.” It worked so well there. But, we had another scene with Krypto and Terrific that we took out.

We were jumping in a lot of places trying to tell a lot of stories because that’s kind of the nature of the movie. There’s a lot of characters in there, but the through line is Superman. We should follow him. So I think that was very challenging.

Ian was very close to it at the very beginning and he saw the first evolution of that whole third act changing, but it continued to change when the visual effects came in and just continued to change until the very end.

So that was a very challenging part because it encompassed two reels to have to juggle things and move things around. We had temp music and new music coming in. The music is scored so that it goes from one scene to another, so once you move the scenes around, the music becomes all disjointed.

The music was working just fine before, but now, because we took the scene out, it doesn’t work anymore. So the composer’s going crazy. Our amazing music editors say they’ll fix it. Paul Rabjohns, our music editor, is fantastic.

The great thing about having the music editors and everybody here is that we can talk about the story. “Look, this thing has been shortened. We need to tie this in because the music is not tied in anymore.”

Either the composer has to write something completely new or Rabjohns has to edit something. That was the challenge because that editing happened quite late in the process.

Ian, did you help with choosing what kind of audio track sequence there was gonna be in Media Composer, or was that just being decided by William and Craig and you were trying to maybe clean it up a little bit as it came through to you?

CHASE: My goal is to sort of be a chameleon and adapt to the format Bill and Craig are used to working in. I’m kind of like getting a roommate you didn’t ask for. My goal is to really just match however they like to work.

I haven’t looked at the tracks in a while, but I think we had maybe five or six dialogue tracks and then maybe 16 sound effects tracks, then maybe four music tracks.

The cool thing that we’ve never done on any other show - and I’ve talked to a couple of the other people who have been in this embedded audio position before - we always bring two computers, one for ProTools, one for Media Composer.

That isn’t new, but what’s new on this show is that we connected an ethernet cable to the two computers, and if you go into the settings of ProTools and Media Composer, you can slave them together, so when you press play on ProTools it also plays on Media Composer.

When you type in timecode in Media Composer, it goes to that location in ProTools. When you make a selection or fast-forward or rewind, they’re in sync. So I get all of the power of ProTools, and we could show Bill or Craig, “Hey, this is where I’m thinking.” I had two interfaces.

So you could listen to the dialogue and music from Media Composer and the 5.1 from ProTools at the same time, in sync, on a TV. They could make comments: “Change this. Change that. I don’t like this.”

Then I would export out of ProTools a 16 track AAF, bring that into Media Composer, and it had all the panning, all the automation, and it sounded exactly like it did in ProTools.

Then for turnovers, the handles and the fades and stuff match, but we never had to go back to the original ProTools session because now it just lives in Media Composer.

Because of the track limitation you can’t put everything in, but you’d be shocked at how much audio we got into this movie. It really ate up sound. As Dave and Katy sent material, we would just mute mine and replace it with theirs.

Were all the tracks in Media Composer 5.1 surround?

ALPERT: We would bring in mono and stereo and we just output it as 5.1. So we panned everything around.

HOY: All the dialogue was being cut on mono tracks. Then we did have mono effects tracks. As the sequence was built we went lower down in the timeline, then they sort of turned into stereos, then eventually they all turned into 5.1.

We had to manage our tracks because the Avid - after 23 tracks - refused to play some of the 24th or 25th tracks. So when we begin the picture, I go through with my assistants and decide: how many mono tracks do we need for dialogue? For ADR?

How many mono tracks do we need for hard effects? Then we get into stereo tracks, the backgrounds and all that. Then we designate the 5.1 tracks. How many 5.1 tracks do we need? Maybe 3. Maybe 4.

Ian or the sound effects people would generate those 5.1 tracks, then two stereo music tracks so we could checkerboard them if we needed to. We were working with at least 21 or 22 tracks: three 5.1 tracks, a whole bunch of stereo tracks, and the mono tracks for dialogue and the effects.

That’s how we start. For this particular environment, I think we can just use a stereo track; for that environment, we need a 5.1 track.

Then ultimately we get to the temp stage and have 5.1 tracks on the temp, and everything is married together, so if we ever wanna separate it or something changes, then we have to get new 5.1 tracks.

So it becomes a bit of a pain, so we want to keep it kind of streamlined at the very beginning when we’re still trying to put the picture together. A little later in the process, the picture gets closer to being final, so we can handle smaller changes.

ALPERT: But I do think it’s amazing how much the Avid has evolved to allow us to do this much sound work in there, because when we started, we were cutting stereo tracks and then Ian’s panning into the surrounds and the subs - all just based on stereo sequences, which is amazing.

Avid timeline screenshot for Reel 3 of Superman

Gentlemen, thank you so much for your time. I really appreciate all of you being on Art of the Cut.

CHASE: Thank you.

HOY: Thank you, Steve.

ALPERT: Thanks, Stephen.
