Working in some capacity on the entire Star Wars franchise—supervising the prequels, and the 1997 special editions of the original trilogy—Knoll received his first story credit on Rogue One: A Star Wars Story, alongside Gary Whitta, while also overseeing the film’s staggering digital face recreations and seamless, Oscar-nominated effects.
Speaking with Deadline, Knoll describes the ways in which visual effects have evolved since his early days at ILM, giving his thoughts on those Tarkin recreations and their implications for the future of entertainment.
You’ve worked as a visual effects supervisor on the entire Star Wars franchise in some capacity, but Rogue One marks your first writing credit. How did that come about?
That wasn’t part of the original plan. It was shortly after Kathy [Kennedy] came onboard, and the announcement that there would be these standalone movies that were self-contained adventures that took place in the Star Wars universe seemed like a really exciting possibility.
I started informally pitching this—”What about the rebel spy mission to steal the Death Star plans? Imagine SEAL Team Six in the Star Wars universe, sneaking into the most secure facility in the heart of the Empire’s military industrial complex, against all odds, to steal the Death Star plans.” People’s reactions were, “Oh, yeah! I can almost see that movie.”
You’ve been at ILM for decades. In broad strokes, how have you seen the field of visual effects change since you started there?
I came in during the era of models, motion control and optical printers. ILM had just started its own computer graphics division, after the Lucasfilm computer division had been sold off and became Pixar. ILM saw the value of all of that from the work on Young Sherlock Holmes and Star Trek II—computer graphics is going to be something. It’s going to affect how we do what we do. We started a computer graphics department to follow up on this initial, pioneering work done by the Lucasfilm computer group, and I was around while that was really starting to take root.
I’d been a motion control camera operator for I think three years before this. Dennis [Muren] asked me to move over to the computer graphics department to keep an eye on the work as it flowed through the department, so I was kind of an early advocate for the use of computer graphics in visual effects, and for its potential. It’s mostly digital technology, especially digital compositing and image synthesis, that has had a huge impact on how films are made now.
There are things that I am nostalgic about from the “good old days.” I loved motion control cameras, actually. I love the way they sound. I used to do a lot of miniature work, and it’s still warranted, but it’s done less often, largely for budgetary, schedule and flexibility reasons. One thing I’m not nostalgic about is optical compositing. In the end, re-photographing elements on an optical printer put an upper limit on the quality of everything that we did. We only got so many whacks at the comp before you started getting diminishing returns, where the elements would start getting dirty, or getting scratched. There’s things that you just couldn’t do with an optical printer. Now, with digital compositing, most of the energy that goes into a shot goes into the aesthetic issues of: Is it a good shot or not?
A lot of what ILM has been pushing on, over the last few decades, has been to try and remove the constraints from filmmaking. The “bad old days” were if it was going to be a matte painting, you had to lock the camera off and it couldn’t pan, tilt or move at all; or if we were going to add an effect to it, it couldn’t be a handheld camera. We’re trying to strip all those things away so you can treat visual effect scenes like any other part of the movie.
There’s a seamlessness to the visual effects in Rogue One. How did you manage to retain the aesthetic of the original Star Wars films—achieved through the use of physical models—but also provide something more modern in appearance?
I think it’s an important part of the visual effects supervisor’s job to get really deeply embedded in production, and keep us all focused on trying to generate the best result. I’m not proprietary about, “I would rather do this effect than let physical effects do it.” No, let’s do the smartest thing for the movie. I think that there is an apparent seamlessness that comes when there is no stylistic break between visual effects and any other part of the movie, when the texture of it just flows like anything else.
To the largest extent possible, I try to make it so that the crew can ignore that something we’re doing involves visual effects. For example, the K-2SO character is computer generated throughout the movie, but the choice to have an actor in a motion-capture suit playing him on set means that, if you can look past the motion-capture suit, the rest of the crew can completely ignore that he’s going to be a computer-generated character. He’s there on set to interact with the other actors, to read his lines, to play his performance off them. The camera operator has got somebody to frame up on. Apart from the fact that we’re going to replace him, everybody else just ignores that he’s going to be modified later.
There’s some footage from the original trilogy featured in this film. Did it really come from the cutting room floor?
Yeah, there’s a couple of shots; I think it’s four in total in the space battle. One of the things that’s kind of delightful about this story idea is how it mates up right to Episode IV, and gradually, as the film plays on, you start to see that happen. You’re seeing more and more things: “Oh, I know that! Wait, that’s that! That’s that guy!” We figured that in the space battle, a lot of those pilots would necessarily be the same ones that are in the Battle of the Death Star in Episode IV.
Gareth [Edwards] inquired about, “Hey, can we look through the dailies of all of the spaceship cockpit material from Episode IV and see if there are any unused bits, lines of dialogue that we could use where it’s a character we recognize, but saying something he didn’t say in Episode IV?” We had two shots each of Red Leader and Gold Leader.
It was a bit of work to take some 40-year-old, 35-millimeter footage, even given that the negative’s been carefully preserved. It’s sort of grainy, and the negative’s faded a bit, and to try to get that to match into the stuff that was just shot a few months ago on an Alexa 65…
There was some cleanup work done there?
Oh, yeah. It was de-noised. Like, the first shot you see of Red Leader, that one actually had me a little worried, because the original, that setup, had been underexposed pretty significantly. There was no shadow detail at all; it just went black. That just wasn’t the look of the rest of this movie. We did a lot of work to try and restore that, beyond just the de-graining. We made mattes of a lot of different parts, like his orange flight suit, to color correct and kind of compensate. A lot of work went into that little two-second shot so it didn’t pop as being visually unlike anything around it.
Top of mind for many Star Wars fans right now is the digital recreation of the late Peter Cushing’s Governor Tarkin. What went into pulling that off?
It’s similar to doing a historic figure in a movie, if you’re doing a biopic and you need to see somebody famous. You know from the beginning you’re going to cast an actor to play that role, so it becomes a casting challenge. Just like if you’re doing any important historic figure, you’re casting for multiple different constraints simultaneously. You need acting talent; ideally you need somebody who can, to some extent, mimic the mannerisms and way of speaking of the person they’re portraying, or someone whose physical build is close. Lastly, in the ideal world, that person does the voice, as well, so you’re not trying to ADR a performance from somebody else. It’s the product of one mind at one time.
When you’re doing a historic figure, usually there’s some effort made to alter the appearance of the actor to look more like the person they’re playing, whether it’s just hair and makeup, or an extensive prosthetic type thing. So conceptually, doing a computer-generated face replace is the same thing that you’re doing when Anthony Hopkins plays Alfred Hitchcock, except instead of makeup, we’re altering his appearance with computer graphics.
What possible legal or logistical implications do you foresee if these techniques become more prevalent in entertainment?
From the very beginning, there was a lot of talk about, well, should we do this, or can we do this—first of all, in terms of, is this something that Peter Cushing would object to, if he were around to object to it. This was done with a great deal of care and affection, and I like to think the role we created for him is one that he would have been really happy about. I think it’s a good role—he’s got a good, meaty chunk of dialogue, and the role he plays is important to the film. And we do know from interviews he did that he really enjoyed his association with Star Wars, and would have loved to be in sequels, had George [Lucas] not killed his character off. Then lastly, we got the blessing of his estate. We wouldn’t have done it if they didn’t want us to do it.
Actually, the same thing with Carrie Fisher. Tragically, Carrie died before she could do press about it, but she was involved and had seen the work, and she loved it. She thought it was really cool. I’ve seen some comments out on the internet along the lines of, how long until we don’t need actors anymore? That’s really wrongheaded. This is not some kind of sinister conspiracy to eliminate actors; we still hired an actor, just as we would any other way. That performance was not something that a bunch of people typed up on a workstation somewhere. That was Guy Henry playing that role. And then, in addition to hiring an actor and paying him his rate and all the usual things, we applied a lot of very labor-intensive technology and artistry on top of that to make him look like Peter Cushing.
So that’s not going to get rid of actors anytime soon, because the acting has to come from somewhere. If anything, the way to think of it is, it’s a way for actors to play roles that they couldn’t play before, to extend their range in the kinds of roles they can do. I actually think it represents a lot of exciting possibilities for performers.
Is that the major possibility you see for this technology in the future of storytelling and entertainment?
Yeah, this is something I don’t think you’re going to see a ton of. People aren’t going to do this willy-nilly, because it’s very difficult and expensive to do. So it’s not going to be done casually. It’ll be done when there are good, compelling storytelling or dramatic reasons to do something like this. But I use the analogy of makeup: you can now transform an actor to look like someone else, but in ways that you can’t physically with makeup. You can do subtractive things that change the whole shape of their head in ways you can’t live on set. It lets actors whose physical resemblance might not be close enough to a character go ahead and play that character, because they have the acting talent and ability to embody it.