Post-Production

Highlights vs. Significant Highlights

As a television professional who spends most of his time working at the very end of an often lengthy, exhausting, all-encompassing process known as documentary film-making, I'm often asked, "What could I have done to make my film look more... filmic?"

The First Law of Filmmaking tends to read along the lines of: Know Thy Camera

Eric Escobar's excellent blog has a recent post that deals with this issue. He shoots the same image with two different cameras. One camera is the HV20 shooting with a 4:2:0 codec; the other is the EX1 shooting at its highest quality, 4:2:2, with a lens adaptor.

Clearly the EX1 wins this shootout (if you click through, give the image a few moments to download). The HV20 is downright ugly in comparison.

Now - I don't care at what frame rate you shoot, the EX1 is far more filmic. Yes? Will 24p make the HV20 feel any more cinematic? No way José.

Eric is onto something here... Know Thy Camera.

He mentions that he had trouble with the HV20, fighting all the auto controls of this consumer-oriented camera, whereas on the EX1 he was able to get the exposure he wanted. This, I think, gets to the crux of the problem. And it's a problem I was reminded of recently while re-reading the terrific book "Professional Photoshop" (the link is in the sidebar on the right). It's the issue of Highlights vs. Significant Highlights.

Go back to Eric’s post and look at those two shots again. To my eyes the biggest difference (besides depth of field) is exposure. On the auto settings the HV20 sees the brake-lights of the cars and the bright patch of light of the sky and thinks, “Gee, those are the highlights. I must protect for those highlights.” The camera ignores that this is a generally low-key image and acts as if the most important part of the image is the sky. Nothing could be further from the truth.

Eric, using his eye and experience, knows better than the auto-iris and sets up the EX1 much differently. Although he says he’s protected his highlights, he’s actually let his highlights blow out in the blur of a short depth-of-field and selective focus. He’s made the choice that the highlights of the sky and car lights are insignificant and instead chosen the significant highlight in the woman’s face. And he exposed accordingly.

The HV20 has the truly important part of the image, the woman’s face, completely compressed into a narrow range - as seen here in FCP’s waveform. In post, when we dig out that detail we’ll be pulling up noise and degrading the entire image. In that process we’ll let those highlights blow out because... who cares??? We want to see the babe!
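
To put a number on that, here's a minimal sketch (hypothetical 8-bit luma values, not measurements from Eric's frames) of why digging a crushed face out of the shadows drags the noise up with it:

```python
import numpy as np

# Hypothetical 8-bit luma samples of an underexposed face: the real detail
# spans only ~15 code values, with +/-2 code values of noise baked in at capture.
face = np.array([22, 25, 28, 31, 34], dtype=float)   # the signal, crushed into the shadows
noise = np.array([-2, 1, -1, 2, -2], dtype=float)    # camera noise riding on top

recorded = face + noise

# "Digging out" the face in post: stretch the shadow range up toward a normal exposure.
gain = 3.0
graded = (recorded - 20) * gain + 20

print(graded)         # the face is brighter and has more separation...
print(noise * gain)   # ...but the noise got multiplied by the same gain
```

The deeper the camera buried the face, the bigger the gain you need in post, and the uglier the result.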

And this takes me to a discussion I had recently with a colorist friend of mine who opined that he's tired of the "protect your highlights" mantra. I tend to agree with him. We've both recently seen too many filmmakers walk into our rooms with footage that protects for the sky out the window and buries the truly significant detail - like human faces - into the bottom 30% of the waveform. No, not even Red can completely save you.

As Dan Margulis says in his book, there are highlights and there are significant highlights. Based on what I see coming through my doors, I say filmmakers need to make sure they protect for the Significant Highlights and let the rest blow out. Especially on a camera like the HV20, where it's far more damaging to try and dig out an underexposed face than to let a window blow out to white.

- pi


Errata - BluRay & Compressor 3


In this previous post I lamented how Apple seemed to be dragging its heels on providing BluRay authoring tools in its Pro Apps suite.

I got at least one fact wrong: Compressor 3 does export for BluRay.


Where did I go to find this out? Adobe!

Specifically, the DAV TechTable blog - which is filled with useful how-to's on BluRay authoring and which I've added to my RSS reader (now that I'm an owner of the Adobe Production Suite CS3 bundle, which supports BluRay authoring on the Mac).

Here's the post which gives explicit instructions on how to export from Compressor for BluRay authoring in Encore DVD. It's not a built-in preset in Compressor, so you'll want to build and save these settings as a Custom Preset.
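
I won't reproduce the DAV TechTable recipe here (click through for the real settings), but as a rough sketch of the kind of spec limits a BluRay-legal custom preset has to respect - these are general BluRay constraints as I understand them, not the exact values from that post:

```python
# Rough sketch of BluRay-legal video constraints to keep in mind when building
# a custom Compressor preset for Encore. These are general spec limits as I
# understand them, NOT the DAV TechTable recipe -- follow their post for that.
bluray_video_preset = {
    "codec": "H.264 or MPEG-2",       # both are legal for BluRay video
    "frame_size": (1920, 1080),       # 1280x720 is also allowed
    "max_video_bitrate_mbps": 40,     # the BluRay video ceiling
    "field_dominance": "match your master",
    "audio": "uncompressed PCM",      # the simplest Encore-friendly choice
}
print(bluray_video_preset)
```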

If you're a glass half empty person, you've got to wonder why this setting isn't shipping as a preset in Compressor. Is it an ominous sign of Apple trying to keep its boot on the neck of BluRay? If you're a glass half full person, hopefully this is a positive omen that the next version of Final Cut Studio will have much more explicit support for BluRay authoring.

- pi


Sour Apples

I first heard the name "Final Cut Pro" in November of 2001. This was when a producer asked me to get up to speed on it for a corporate gig the following January. It was probably the very next day that I read online that Final Cut Pro was going to be sold.

It's a rumor that won't die.

Ever.

This year's rumors have a slightly different tenor. Apple pulled out of NAB. Whatever reason they state, with $18 billion of cash in the bank, money isn't the issue. Or - at least, potential access to money isn't the issue. This non-MBA imagines that Jobs forces each division to stand on its own, and if ProApps has money problems such that they didn't think a booth was worth the expense... perhaps they're having trouble meeting their margins. At least Avid has an excuse for its NAB disappearing act that Apple doesn't: Avid is undergoing a major re-organization. They'll be back at NAB once their new strategy is ready to roll.

If you want to read what I consider the most interesting analysis on Apple selling ProApps, then check out this article by Robert X. Cringely.

Cringely's analysis helped me gather my thoughts on something else that is bothering me about Apple's handling of its ProApps division. And it has me starting to wonder if Apple is the best company to manage the Final Cut Studio array of products. Specifically, it's Apple's handling of BluRay that's at the heart of my misgivings.

None of Apple's ProApps support BluRay DVD creation. Final Cut won't export to BluRay. Compressor won't encode to BluRay. DVD Studio Pro won't author BluRay. Not a single Mac ships with BluRay playback or burning. And my wife's business is getting weekly calls for BluRay duplication and authoring.

For the first time in my memory, Apple has fallen behind my customers!

Why? Why? Why is Apple forcing me to consider buying Adobe Encore or (hissssss) a PC-based authoring tool for a need my clients want today?

It drives me nuts that a company so forward-thinking is dropping the ball on next-generation content creation. As Cringely points out in an earlier article on Apple's (lack of) BluRay strategy, the answer is probably summed up in one concept: High-Def Downloads.

In other words: Apple's consumer strategy is now at odds with its development of its ProApps product line.

Is it possible that Apple no longer deserves to handle the ProApps division? Has Apple finally reached its inflection point where it will sacrifice its traditionally strong and loyal ProApps customers for its newfound success in content delivery?

I don't know.

I know this: For the first time in 7 years I'm not discounting the Cringely analysis. For once, the rumors may be true.

If Apple does sell the Pro Apps division at a time when it's still holding back on delivering BluRay creation tools... I'll say, good riddance - it was a great ride but it will have been time for both businesses to move on.

UPDATE 1: Not everyone buys Cringely's analysis.

- pi



Color'ist Stamina


I'm just bubbling up to the surface after 2.5 weeks of non-stop Color'ing. 10 hour days. 600 shots / day. Fair amount of secondary isolations with vignettes. Exhausting. More exhausting than I expected.

I usually have a few intense days of color correction that are followed by a day or two of finishing. I switch gears fairly regularly. 10 hour days are common and not too stressful.

On this past job I was downtown at Outpost Digital working as a freelancer on a Discovery series (all other details of the job are embargoed). I had the luxury of an Assistant Editor prepping timelines for the Color roundtrip and a Finisher to handle the graphics, formatting, & outputs. For me, it was all color correction all the time.

I was able to turn around 50+ minutes of footage, about 600-700 shots, in 13-16 hours across two 10-hour days. Essentially, an episode every 1.5 days (including client revisions). By each Friday I was completely wiped out. Far more so than if I put in a 50-hour week doing finishing. I contacted a friend who's a long-time colorist about his stamina on the job. I was wondering if his eyes were used to the routine.

Apparently not.

While he's no stranger to much longer days - he also finds diminishing returns after the 10-hour day. His words:

"I hear you 10 hours and I'm done.  I start to loose my peripheral vision and I know it's time to rest.  Longer than 10 hours the slower I go to the point when I realize how long this is taking and stop.  I've worked in a few places and when I was at the CBC they had a nice monitor surround to take some of the stress off your eyes.  In my new place I have some strips of white LEDs behind the monitor and until they finish the room this will have to do.  I had a killer week last week and I wish I had the Herman Miller chair that I had from my last company.  This makes a big difference as well."

Dittos on the Herman Miller chair (the one with lumbar support and tilt forward control).

In the future I need to dial back client expectations so my eyes are as fresh on Friday as they are on Monday.

- pi


BluRay Replication - Paying For Nothing


In a previous post I lamented the high costs of BluRay replication for short runs (less than 5,000 pieces).

These costs can be attributed directly to the mandatory copy protection scheme (DRM) in the BluRay specification. Not only will a company like my wife's (Dubs by Pam) have to pay a one-time fee to place orders on behalf of her customers; her customers will also have to pay a per-title fee. And these fees are non-trivial for these types of short runs: $5500 for the duplication house, $1900 for each title (according to Larry Jordan on Digital Production Buzz).

Why, I ask myself, must they (the AACS) keep ringing us up for copy protection when almost none of our clients want to pay for it now or for the foreseeable future?

Ars Technica has the rather in-my-face-now-that-I'm-looking-for-it answer: The AACS needs to keep paying for continual development of new DRM schemes because they know they'll be cracked every few months. I betch'a if my accountant took a look at their books, that line item on their Income Statement is probably the budget for creating new "uncrackable" codes.

What a joke.

The only way the large motion picture distributors will ever be able to keep their content from being illegally distributed is to implement DRM directly in the human optical system. Otherwise, if they want us to buy their DVDs to watch a movie at home, at some point the signal must be decrypted for the digital-to-analog conversion, and that will always be the point of attack. You can't have mass distribution and a locked-down distribution method at the same time - then it's not mass distribution.

So. The AACS maintains the fiction of DRM for the movie studios and the rest of us have to pay. Literally.

I suppose I shouldn't be complaining too loudly - it makes services such as those offered by Dubs by Pam that much more economically feasible...

Still - the short-sightedness of the whole DRM racket is stunning.

- pi


The Color Conundrum


UPDATE 2:
It's been commented to me that my opening line, "Color is broken" is a bit extreme. I'd agree - if you work in a purely progressive frame workflow or a purely interlaced workflow that involves no resizing, distorts, or anamorphic flags - Color is fine. For the rest of us... I think it's broken. (In fact, I had a meeting this afternoon where I made clear my preference for progressive with no mixed formats in a single timeline)

But absolutely - decide for yourself if this bug breaks Color for you.

---------------


UPDATE 1:
More on the Geometry Room issue I mention in the original posting - a poster on the Apple message board mentioned that he uses the Geometry Room to zoom in on skin tones to check them in the scopes, to see that they properly lie on the skin tone line (something I first saw suggested in the Ripple Training Color tutorials). He'd then click the reset button in the Geometry Room and move on to his next task. In my own testing I've confirmed that this is enough to force Color into frame-blending interlaced footage. Pressing reset doesn't help. Once a shot is flagged as having touched the Geometry Room, that shot is toast.

If you have an external CRT hooked up to your system (you do, don't you?) it's easy to confirm that this is happening. Just park on a frame that exhibits the typical jitter of interlaced footage (most evident when there's lots of motion on the screen). Go into the Geometry Room and change a setting. The jitter disappears. Color has suddenly decided to frame-blend this shot. Click Reset. The jitter doesn't re-appear (like it would in previous versions of Color). The shot is still flagged for frame blending. Switch to a new grade. Still no jitter. Whatever else is happening, switching grades doesn't fix the problem.

I haven't found a workaround to this particular problem.


--------------------------

Color is broken.

But before I get to the specifics, some quick background.

There's an old problem that dates back to Color's Final Touch days, before the Apple purchase. In those days (and to a certain extent, these days as well) you had to be very, very careful how you handled interlaced footage. Color was originally designed for high-end Digital Intermediate work - which means it was optimized for a film-based progressive RGB workflow.

It wasn't until development was well under way that the original management team decided to open up the software to High Def and Standard Def formats. In doing so, they never really solved how to get Color to handle interlaced footage if that footage had to be blown up, shrunk down, or repositioned. If you "repo'ed" a shot and that shot was recorded on an interlaced codec, all you got back was mush. That "mush" ranged from slightly softening the image to horribly destroying it, depending on the nature of the content.

To get semi-technical: The problem exhibits itself as really bad frame blending.

When Color was released, Apple decided to avoid the whole "mushy image" problem by having Color ignore all Motion Tab effects and let FCP handle that portion of the job. It was a smart way to address the issue. And it worked. With emphasis on the past tense.

Interlaced footage is broken again in Color 1.0.2.

In my testing last week I found that when it comes to handling standard-def footage there was only one way to avoid the "mushy image syndrome". That's by being sure both of these are true for any project I send to Color:

1. No repo's, distorts, or anamorphic flags on the footage.

2. The FCP timeline frame size must be a preset that exists in Color. For instance, 960x720 always renders with frame blending - no matter what and regardless of the previous Condition #1.

(Note: A recent posting on the Apple Discussion Board suggests that even doing a "repo" in the Geometry Room and then canceling it out is enough for Color to frame blend its renders)

What does this mean to those of us still working in the SD world?

It means we now have to go through our timelines and strip all motion effects before color correcting, then add them back one by one after color correcting.

This is NOT progress. It's been a year since I've had to do this and I had hoped we had put this behind us.
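
If you do have to bite the bullet, it helps to know up front which clips actually carry motion effects. Here's a rough sketch - assuming you've exported the sequence as FCP XML, and assuming motion settings show up as filters named "Basic Motion" / "Crop" / "Distort" (this varies by FCP version, and some exports include a default Basic Motion on every clip, so treat the output as a starting point, not gospel):

```python
import xml.etree.ElementTree as ET

# Rough sketch: scan an FCP XML (xmeml) export and list clips that appear to
# carry motion-tab effects, so you know what to strip before sending to Color.
# Assumes such effects show up as filters with these names -- verify against
# your own FCP version's export before trusting the list.
MOTION_EFFECTS = {"Basic Motion", "Crop", "Distort"}

tree = ET.parse("my_sequence.xml")          # hypothetical export path
for clip in tree.iter("clipitem"):
    clip_name = clip.findtext("name", default="(unnamed)")
    flagged = {n.text for n in clip.iter("name") if n.text in MOTION_EFFECTS}
    if flagged:
        print(f"{clip_name}: {', '.join(sorted(flagged))}")
```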

For all the nifty improvements in Color 1.0.2 - for me and my clients - this workflow is not worth the pain. But there's a question that, after a weekend of pondering, I haven't found an answer to:

Is it safe to reinstall just Color and upgrade it only to Color 1.0.1?

The Color 1.0.2 upgrade happened in conjunction with the entire Final Cut Studio 6.0.2 upgrade. And that upgrade contains some very important bug fixes within Final Cut Pro.

So do I add a half day to every job to handle the new bugs in Color 1.0.2? Or do I add a half day to every job because Final Cut Pro 6.0.1 loses my renders and I have to spend 4 hours re-rendering?

My head's spinning here. And my favorite people in the world whom I've never met (the entire FCP and Color teams) are responsible for it.

Is this the perfect Monday morning blog post, or what?

- pi

The Blu•ray Blues


HD-DVD is dead!

Long live Blu•ray??

Not so fast. In the current issue of Digital Production Buzz's newsletter (I'd link to the actual piece but the content is refreshed every week) Larry Jordan lays out the costs of replicating on Blu•ray. I knew the costs were high, since the Blu•ray spec requires copy protection - and not just for Hollywood movies but also for your HDV baby pictures.

Or your demo reel.

How much? Here's Larry's breakdown:
  • $2,500 : License Fee to author and distribute Blu•ray
  • $3,000 : One-time fee to AACS. (I think this is billed per production company / individual)
  • $1,585 : Per complete Blu•ray project
  • $.04 : Four cents per disc. Fee paid to AACS
  • $.01 : One penny per disc paid to Sony to handle all these payments on your behalf
So, for your first project, licensing alone will cost $7,085. That's in addition to the actual costs of replication/duplication and packaging that we're already used to paying.
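
To put those fees in per-disc terms, here's the arithmetic using Larry's numbers for a hypothetical 500-piece first run:

```python
# Larry Jordan's fee breakdown, applied to a hypothetical 500-disc first project.
license_fee   = 2500     # one-time license to author and distribute Blu-ray
aacs_one_time = 3000     # one-time fee to AACS
per_title     = 1585     # per complete Blu-ray project
aacs_per_disc = 0.04     # four cents per disc to AACS
sony_per_disc = 0.01     # one penny per disc to Sony

discs = 500
licensing = license_fee + aacs_one_time + per_title + discs * (aacs_per_disc + sony_per_disc)

print(licensing)          # 7110.0 in licensing for the run
print(licensing / discs)  # ~14.22 per disc -- before replication or packaging
```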

Idiotic.

If you were wondering why Sony spread their dollars around so liberally to pay off movie studios, you've got your answer.

It looks like Blu•ray is going to be the exclusive domain of the Studios. I guess the rest of us will have to settle for HD-DVD on standard DVD 5's (except that the players won't be made anymore).

Oh, and who is AACS? Just a couple of guys named IBM, Intel, Microsoft, Panasonic, Sony, Toshiba, Walt Disney, and Warner Bros...

Huh. Suddenly this is starting to make sense...

UPDATE: The Avid-L list was on this discussion a week or so ago. It seems there are some compatibility issues with duplicated (not replicated) Bluray discs, depending on the authoring software and the playback machines (duplication, of course, also bypasses all the fees detailed above). Apparently some Bluray players want to see a copy protection folder, even if it is empty, and some authoring apps don't put those folders on their burns.

Don't ask me to confirm this... calls have only just started to trickle in to our sister company Dubs by Pam asking about Bluray duplication. But this blog is named the Finishing Line, and for many clients, delivery will soon go straight to Bluray.

- pi


How To Talk To A Colorist


DV Magazine has a nice article titled DV 101: How to Talk to A Colorist:


"[T]he first session with a real colorist can be a bit intimidating for the novice filmmaker. Understanding the basics of what is possible and what the colorist is doing to manipulate the image will help alleviate some of the trepidations you might have going in for your first session."


After going through some basic terminology and example grades the author, Jay Holben, offers up this stellar piece of advice,


"It’s a good idea to start with a defining shot for a particular sequence... but the most important shot for the scene is the close-up of the actress that happened to be the 10th shot for the day. It’s a good idea to start with that 10th shot, establish the look that you want for the sequence on that hero shot and then have the colorist go back and match the rest of the sequence to that key shot."


Sage advice. The whole article is a good read for anyone new to color correction.


Hat Tip: Ted Langdell via Telecine Internet Group email list


- pi


Review: Magic Bullet "Looks" - Slick, Sexy, A Few Flaws

I've been a long-time owner of the Magic Bullet filters. Originally - back in 2002 - I bought it to emulate the 24p "Look" for a job. Even then, I was never enamored with post-processing 29.97fps footage to 24p. I've found most filmmakers use it as a crutch. As if it were some sort of... magic bullet. I'd much rather filmmakers forget about frame rate and focus more on framing, lighting, and exposure. Those elements get you far more production value than simply emulating 24p.

But times, they are a-changing. True 24p cameras are affordable and available. With some forethought these cameras mitigate much of the need for 24p emulation (yay!).

Far more in demand is the ability to create the ever elusive "Look". Whether for a flashback, dream sequence, historical recreation, emotional impact, stock emulation, or mimicking an in-camera technique (diffusion) - coming up with some "Look" (always - one which nobody has ever seen before) is a frequent request. The Magic Bullet Looks subset of filters has always been a stand-by of mine - though, unfortunately, it stands-by a bit more than I would have preferred.

Why? It suffered from having to work within the Final Cut Pro (or After Effects) filter User Interface.
Old Magic Bullet UI
The Looks Suite consisted of long run-on lists of numerical entry boxes and sliders. It's like trying to color correct with the 3 Way Color Corrector's numerical rather than graphical interface; it's powerful, but it gets old fast...

...and that sums up how I've long felt about the older Magic Bullet Editors package. Powerful, but it gets old fast.

In mid-October Red Giant Software released Magic Bullet Looks. It's an upgrade for Magic Bullet Editors - and my first impression was, "Wow. Sexy".

New Magic Bullet Looks Interface

My second impression: This is easily the best-looking, best-feeling interface I've seen... anywhere. It's fast. It's responsive. And best of all - the 100+ Look presets all update to show you a preview using the current frame you've got loaded! What a time-saver.

Preset display updates live

The point of the presets isn't to just apply it and move on (you know who you are) - but to use it as a starting point. With the new Magic Bullet Looks, if a client asks for a contrasty diffuse look - I can open the presets tab, reveal the Diffusion presets, and by looking at the small thumbnail pick the preset that seems to get me closest to the desired look. Once applied, I can start tweaking until I dial-in a pleasing result.

In comparison, the ColorFX room in Apple's Color makes much of the same promise as Looks. It has a bunch of prebuilt presets. But the thumbnails provide zero insight into how any particular preset might react with the current image (unless my image is a low-angle shot of the Golden Gate Bridge). Either I've got to go through and apply every preset to find my jumping-off point, or I'll just start from scratch.

Presets display in Color

I offer this up as my highest praise: In many respects, I wish Looks was the ColorFX room in Color. The nodal approach that Color uses for creating a look is very powerful but very unintuitive. To be fair, Looks doesn't have the kind of repair (RGB split), grain management, and math tools of the ColorFX room. But the Color interface doesn't try to help me along as Looks does. MB Looks has a nifty help feature that describes every filter I can apply as I hover my mouse over the filter. Unfortunately, the help text is unhelpfully located at the polar opposite end of the screen from where my mouse is hovering. The font size of the help text also assumes I've got my 20-something set of eyeballs. Us "experienced" folk need a little more help than that, please.

Another shortcoming of Looks is that as I'm working on a Look, I can't see it output to my external monitor. I'm finding color decisions I make within Looks have to be tweaked once I press the "Apply" button and my monitor updates to show me what's really happening with the image. I don't think I can blame the Red Giant folks; I believe this is a limitation of the Final Cut plug-in architecture. Color Finesse suffers from the same problem when used as an FCP plug-in.

Lastly, it's clear that much thought went into assisting us in designing a look. Looks uses a Subject / Matte Box / Lens / Camera / Post metaphor, guiding a filmmaker in deciding what effects to apply in what order.

Filters are applied in an order that mimics real-world workflow

For a more general audience I think this is fine. But I would like to be able to toggle into a PowerUser mode that doesn't restrict me from placing filters that exclusively belong to the Camera elsewhere in the chain. At times, I felt more restricted than I should have been. I understand the metaphor / paradigm that Looks is using, and that much of its target audience is actually freed by following this logical workflow. Still, I'd like the opportunity to be freed from the shackles of reality when creating unreal or hyper-real looks.

Overall, Looks is a fantastic product. There are a dozen nice little interface elements I haven't mentioned that really speed up user interaction. It's an amazing upgrade for anyone who owns the previous Magic Bullet Editors package. And if you find yourself constantly trying to implement specialty looks, it's worth the full purchase price. I hope Apple's Pro Apps team takes a close look at this software... while it's missing some of the power features I'd like to see (flexible re-ordering of filters), it has a certain "fun factor" missing from many of today's professional apps. And the rendered results look great.

Be sure to check out the blog of the creator of Magic Bullet Looks. There's a secret feature that I haven't gotten into that's very nifty.

You can download a demo here.

- pi


The Value of Specialization

Whatever your business, I believe there's huge upside to specialization. Not only does it allow you to become really good at something - it increases your value and helps you differentiate from your competition.

But don't take my word for it...

One of the students in the last Color Correction Workshop I helped teach emailed me the other day. Here's the last line in his email:

Btw, I did a CC job last week on a tv spot...the skill has allowed me to charge an extra $15 per hour.

Yay!

Woo Hoo! Go Harold!

Yes I'm training my competition. As did the editors who trained me 18 years ago... I do this in honor of them.

- pi


Avid Scared Out of the Water?

At the end of one of my favorite movies, The Hunt for Red October, an American hunter-sub performs an emergency maneuver that has it popping out of the water like a giant whale. One of the rescued Soviet sailors screams, "The Captain has scared the Americans out of the water!!!"

That scene reminds me of Avid's recent announcement that they won't be on the show floor at NAB. This announcement was made a few weeks ago and sent huge waves through various online forums as everyone chimed in on what they thought of it. I don't concern myself too much with Avid anymore, as the software I want isn't at a price I'm willing to pay. But I was once a Symphony guy, the Avid vs Final Cut article is the most popular page on this website, and the Symphony is an NLE I directly compete with, so I started reading the commentary around the net.

The most thoughtful comes from Frank Capria at Capria.tv. He has some good insights into what Avid hopes to gain from this strategy. He believes it's a risky strategy and details some of the shoals they need to avoid. I'll add: It's not like NAB is going the way of the Consumer Electronics Show, where many companies feel they can't get their message out anymore. NAB is very relevant and good products do get their message out from that platform. If Avid is bailing on NAB, then something is not right at Avid - and they now admit it.

What I haven't read is a good reason Avid's been forced into this realization. I don't think it's the pressure from Final Cut...

My last NAB was two years ago - when Adobe unveiled the start of their Studio package. And honestly, it was a very, very strong showing. The audio editing tools surpassed Soundtrack Pro, and Premiere (today) is only a rev or two away from seriously being able to replace Final Cut. Their DVD solution is said to be top-notch. And of course, After Effects and Photoshop are the winners in each of their classes.

When you plot out from NAB 2005 to October 2007, Adobe has continued to execute, adopting Apple's "Studio" concept to help lock in users, while Apple continues to improve its ProApps division, most notably with the acquisition of Color. With Adobe and Apple poised to take direct aim at each other in the sub-$3000 NLE market, Avid's failure to execute in integrating the apps it has acquired doesn't bode well for them.

But the story doesn't end there - at the high end, Avid has mismanaged the DS and Symphony offerings. They confused many of their customers and never quite differentiated those platforms enough. But at least they had more breathing room there. In the $60,000 - $80,000 NLE market they have a proven, turnkey toolset and not much competition - but AutoDesk is changing that...

I was recently involved with an FCP + Smoke/Flame integration demo at a local reseller. I spent 10 minutes of the demo modifying a timeline with effects and text within FCP. Using the same media, the Smoke was able to import that timeline (via XML), ingest the media, and play it back from its drives. The rest of the Smoke demo was very impressive - it's come a long way since I last saw it in v3. And it's a Symphony killer - at least in terms of feature set. (It's also a very complex, deep machine that is a tougher transition for the Symphony editor than the transition to Final Cut - slowing its adoption rate and giving Avid some time.)

But with a basic turnkey Smoke system running on Linux for $90,000 (compared to that system costing $180,000 five years ago) and impressive media sharing capabilities in an FCP shop, the top end of Avid's market is starting to get squeezed as well.

It seems to me that Avid's NAB strategy this year is akin to them stopping living life like the Red October and becoming the American hunter-sub scared out of the water. They're about to get torpedoed and they need to differentiate. Fast. The process needed to start last week. And it did, with their announcement.

For Avid, the only thing worse than not being at NAB is being at NAB and looking like they have nothing new to offer - for the third straight year.

- pi


My First Hate Mail



Hate Mail!

I finally got one, sent from the Contact form on this website. The sender was complaining about finding "another editor claiming to be a colorist" and mentioned something about the real-time nature of his color-correction hardware and how he charges "$1000 per hour" (sniff, sniff).

On the one hand, I love the fact that I got hate mail from a "professional colorist". It means the software is starting to make the hardware-based folks nervous enough to start Googling us. And that means our tools are getting powerful. Though not quite there yet - as evidenced by his 'real-time' comment and another comment he made that Apple Color's "secondary tools are crappy". Gee, I guess he cracked open a copy to check it out (though I'd counter that Color's secondary tools are far less crappy than FCP's non-existent secondaries - we're moving in the right direction).

On the other hand, I'm annoyed by My First Hate Mail. Where did My First Hater get the idea I claim to be a Colorist? Certainly this website makes it clear that my finishing skills are broader than just color correction - but color correction is my specialty. I have grown tremendously in that skill set over the past 7 years. I read everything I can get my hands on and then I do it... over and over and over and over and over again.

Yes, I enjoy color correcting 1200 shots in a few days. Tweaking contrast, balancing tones. Yes, there isn't a single show I've worked on that 6 months later I don't look at and say, "I could do that better today." But heck, if there's *any* professional working today who thinks all their work is perfect and they have nothing new to learn - they're on a professional decline or delusional or both. They've definitely stopped growing.

Color correction - and Final Touch (now Color) - rejuvenated my enthusiasm for my career. I originally renamed and refocused my company to specialize in Finishing and Color Correction for very pragmatic reasons (easier to differentiate myself from every other FCP owner working in his mother's basement). 18 months later I discover I love this focus far more than I thought possible. It blends my dormant Director of Photography gene with my Editor gene and gives balance and indulgence to each.

In the end, the writer of My First Hate Mail and I have this in common: We both make pictures look better. And, ultimately, who decides whether our pictures indeed look better and whether our services are worth our asking rates? Our clients.

I'll let My First Hater claim the mantle of "Colorist". I'm on the road to Craftsmanship. I'll continue to grow, learn, and occasionally teach. I'll keep trying to base my business on my Skills. I just hope My First Hater does the same or he'll get Moore's Law'ed out of the business. When today's iMac can handle 1080p, how much longer before it can run his DaVinci? In real time? Something to think about...

- pi


The Law of Unintended Consequences - 6500k Wrap-up

What happens when a finishing room with 6500k bulbs has light spilling in from the hallway because the door is mostly a large pane of frosted glass? Do you cover the inside of the glass with black fabric?

Not me.

No, no, no.

I decide to change the hallway light bulb to 6500k. What happens next falls directly under the header of "The Law of Unintended Consequences"...

You see, my room is at the end of a hallway - so changing the bulb outside the door solved the problem of mixed light temperatures filtering into the edit room. But when you walked down the hallway, that one light fixture suddenly stuck out like a sore thumb. It was a lone brand-new bulb shining in its glory - a full 3000 degrees hotter than any other light in the hallway.

In a world of dull orange lighting, the bright blue bulb became an eyesore. The next step?

That's right, I changed all the bulbs in the hallway to 6500k. The hallway brightened considerably (I figure the previous bulbs were at least 3 years old and were quite tired).

And then came my co-workers' headaches. It seems the new bright blue light filtering out of the hallway and into their offices was mixing color temperatures with their 3000k orange overhead fluorescent lights. I'm guessing the constant white balance adjustments their brains were forced to make tired people out.

So, what's a geeky finisher to do? That's right...

We installed a total of 25 6500k fluorescent lights!

All this because I wanted a properly lit edit suite and didn't want to close in my already small-ish space by covering the door with black fabric...

- pi


6500k & the REAL point of Industry Standards

Following up my previous post on implementing 6500k lighting in the edit room...

I did a search of the TIG (Telecine Internet Group) mailing list on this issue. Bob Currier of Synthetic Aperture (and creator of the very good Color Finesse color correction plug-in and software) had the discussion-ending post on why colorists follow the SMPTE standard for using D65 light in their suites:

"There is a standard and it's 6500K.

This has nothing to do with making the image in the grading suite match the image at home.

Instead, it has to do with consistency so that all our 6500K standards-based grading will appear compatible when shown on Aunt Millie's badly mis-adjusted 9300K TV. If some of us are grading on 6500K monitors and some on 9300K monitors, things will look rather poor indeed when they air back-to-back. Not only will commercials not match the programming, but commercials won't match each other.

Besides, Aunt Millie likes her over-saturated, blue look. If you start making that look "normal" she'll think you broke her TV."

'Nuff said. Final word. I'm satisfied.

Thanks Bob.
------------------------

On a related note, I also found a link from a TIG posting showing the difference in spectral output from a blackbody radiator (the Sun) and a human device attempting to imitate that black body radiator. If you go to the link, click on the "Back to the calculator" button, then select D65 and then enter 6500 for Blackbody. D65 would be your TV set or your room's 6500k ambient lighting. Blackbody would be the sun.

Try punching in other values - a typical incandescent emits at around D35 (click on the graph to update). Notice how much more red it emits. If your camera was white balanced for daylight (D65) while shooting an interior under an incandescent light, what do you think would be the predominant color?
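
If you'd rather run the numbers yourself, Planck's law is simple enough to compute at home. A minimal sketch (pure blackbody physics; no attempt to reproduce the calculator's D65 curve) comparing a 6500 K and a 3500 K radiator at the blue and red ends of the visible spectrum:

```python
import math

# Planck's law: spectral radiance of a blackbody at wavelength lam (meters)
# and temperature T (kelvin).
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck(lam, T):
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

for T in (6500, 3500):
    blue = planck(450e-9, T)   # deep blue
    red = planck(650e-9, T)    # red
    print(f"{T} K: red-to-blue ratio = {red / blue:.2f}")

# The 3500 K source puts out several times more red relative to blue than the
# 6500 K source -- which is why an interior lit by household bulbs goes orange
# on a daylight-balanced camera.
```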

- pi


Dr. Daylight - Or How I Learned to Love 6500k


2 things have happened in the past 3 weeks that led to my decision to upgrade the lighting in my Finishing Room:
  1. I put money down on a JL Cooper Eclipse CX (at a very good price). More commentary on that device can be found here and here and expect a full review once it's in-house.
  2. I visited the color correction room of a new colleague, Alexis Van Hurkman.
Inspired by both events happening in close proximity, and because Alexis turned me on to how to do it cost-effectively (a necessary pre-condition), this week the lighting at Fini is going 6500k.

Why 6500k? You'd have to ask SMPTE, the folks who handle television signal standards and decided that the proper white point for television sets is 6500 degrees Kelvin (the equivalent color temperature of daylight at noon in North America (if you have to ask, I suspect they took their temperature readings in Las Vegas)). In theory, the most neutral environment for color correcting video is ambient lighting with the same color temperature (white point) as video. 6500k. Or, technically, D65. This way we're not trying to compensate our color correction for the light surrounding us - which, if you're using a normal bulb, is much more red/orange.

One might then ask: Why'd I wait until now to make the change to 6500k lighting? Good question... Thanks for asking. Two reasons, actually:

  1. First, I don't know a single person who lights their living room / home theater with 6500k lighting. And since nearly 100% of my work has been for home viewing - I didn't worry about my room not meeting some industry specification which was designed by soulless engineers in a vacuum (so to speak). I mean, these are the same geniuses that gave us non-drop and drop-frame timecode (not to mention the idiotic array of HD formats and frame rates).
  2. Second, even in the heyday of post-production Standards & Practices (pre-miniDV) - only Film-to-Tape guys ever bothered to meet these specs. Yes, us video folks took our specs seriously back then (how many editors reading this post can figure out if their blanking is too wide?) but the general industry attitude didn't extend to 6500k lighting. Why? Probably because our stuff looked the same at home as it did in the edit room (see the preceding paragraph).

One might then follow up: Why are you doing this now? If the status quo has been good enough for the past 17 years, why bother implementing the change now? Another excellent question. There are several reasons:

  1. Fini started out focusing on providing an array of post-production services, of which color-correction was only one. Providing other more traditional online services was my bread-and-butter. But with the change of focus last year color-correction moved front and center. (Similarly - that's also why Fini is investing in a JL Cooper control surface. Not because it's necessary to create great pictures, but it dramatically increases productivity).
  2. Part of the reason why Online rooms didn't implement 6500k lighting is because, except for sports, almost all our footage came to us color corrected - either from a film-to-tape session or "shaded" by an engineer in a studio. And outside of specialty tape-to-tape rooms, we had very crude controls over our images. Today, it's almost exactly the reverse. Not only do we have sophisticated color correction tools, 80% of our work hasn't been color corrected - in fact, it's why clients are coming to Fini in the first place! That shift in client needs has shifted our need for the type of controlled lighting specified by SMPTE, previously the domain of telecine and tape-to-tape rooms.
  3. A recent posting by Martin Euredjian of E-Cinema on the FCP-L mailing list put this perfectly (though he was specifically speaking of color-critical monitors):

    "Audio seems to be easy for people to use as an analogy. I don't think that professionals would propose doing serious mastering work using an iPod. Or an iPod with headphones. And, even if you did connect great speakers to an iPod...would anyone propose doing so without at least attempting to calibrate the thing to reasonable professional-level standards? Would we want to know if we can achieve the frequency response and harmonic distortion targets that are deemed as minimum-acceptable for professsional work? Probably. And, then, would anyone propose to use such a system in a listening enviroment that was devoid of proper acoustic treatment in order to ensure that what was coming out of the speakers was being perceived correctly? Probably not."


Probably not. And that's why I've gone 6500k in the Finishing Room. As I've made the decision to provide more color-critical services to my clients, I've got a responsibility to know what the signals I'm creating actually look like. I've got to know that anyone who keeps to professional specs will see what I see. And given that it's no longer hard to find 6500k lighting for your media room, the average viewer has an above-average chance of seeing the image as it's meant to be seen.

- pi


Thoughts On The Tekserve Red Event


I attended the "Red Event" last night at Tekserve.

It was a generally uncomfortable event in which 150 people were jammed onto the showroom floor with inadequate air conditioning (Tekserve is always an uncomfortable place to shop) and stood for an hour. It seemed most people watched the event from screens throughout the store, and the tallest people in the room had been given priority access to the first row, blocking everyone's line of sight... chairs would have been better.

I'm not going to go into Red workflow specifics because so few people have access to the Red camera. The people that are now shooting Red have workflows that are far beyond the scope of the clients I choose to serve. In a few more months we'll be able to test and refine a Red workflow "for the rest of us". But Red is an amazing technology and it was great to see the owners of Red #6 & #7 presenting to the NYC community.

Here are some of my impressions:
  • Red should be hugely desirable to the Fini client base. It's affordable, accessible, scalable, and future-proof. It's a disruptive technology an order of magnitude larger than Final Cut Pro was. It will put a lot of people out of work... but give opportunity to far more people.
  • Red is a complex workflow - largely because of its scalability. There will be several unique and distinct workflows for different deliverables. Some purists will rail against the DV-crowd taking up this camera... they will argue that everyone should be delivering 4K all the time... they will be wrong. But the clients they serve will also feel the same way, so there's no need to worry that the Red camera will bring us all together in a Kumbaya / We Are The World oneness.
  • The Red team isn't telling how many cameras are reserved, only that the number is in the thousands (which I take to mean more than two thousand). Compare that to the number of Vipers and Dalsas out in the field shooting today - it's as if Apple had sold 10 million iPhones by November. It's a crazy-big number.

Also showing at Tekserve last night was Scratch - a high-end software-based color correction app. I was intrigued by its power, flexibility, and depth. And unlike Color, it can read the RedCode directly - no need to transcode to some intermediate codec like ProRes. But at $50k a seat - it's not for my clients. It's priced for facilities running the Autodesk products (Flame / Smoke). In fact, the GUI looks like Autodesk funded the project. It's a total and complete Flame rip-off. There are some nice breakaway 'widgets' for moving between modalities, but it's an interface partly designed to make high-paying clients comfortable that their money is going toward hefty lease payments.

I was disappointed that the Scratch guys never got around to showing us Red Alert (I think that's the name of the app), which is currently shipping with Red. It's designed for evaluating and modifying images from the camera, both in the field and in post. Considering this was a Red event, I was a bit peeved that Assimilate turned the demo room into a Scratch event. Poor showing, boys.

The Big Takeaway

The presenter at the event (one of the owners of Off Hollywood Studios) made a point that I think is relevant to anyone creating pictures. He mentioned how the images coming off the sensor don't look all that pretty. He said the goal with a camera like Red is to concentrate on latitude - don't clip highlights or shadows. Pretty is done in post; capturing as much dynamic range as possible should be the objective. I think he's dead-on correct. But I don't think this is only true for the Red camera. In fact, this is especially true for DV or DV50 shooters.

Yes, you want good lighting and a talented DP is as critical as ever. And a talented DP will preserve as much detail in the image as possible.

Image Detail = Production Value

One ingredient to make your video look like not-video is to preserve your highlights and not let your shadows fall into total blackness.
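
A toy example of why a clipped highlight is unrecoverable (hypothetical scene values, not tied to any particular camera):

```python
import numpy as np

# A bright sky gradient as the scene actually is (relative exposure values).
sky = np.array([0.80, 0.90, 1.00, 1.10, 1.20])

clipped   = np.clip(sky, 0, 1.0)   # an exposure that lets the sky clip at 1.0
preserved = sky / 1.2              # an exposure pulled down so the whole range fits

# In post, bring both back down to a printable level:
print(np.round(clipped * 0.8, 2))     # [0.64 0.72 0.8  0.8  0.8 ] -- a flat patch, detail is gone
print(np.round(preserved * 0.96, 2))  # [0.64 0.72 0.8  0.88 0.96] -- the gradation survives
```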

UPDATE - Two quick notes:
  1. When I say the Scratch GUI looks like a Flame rip-off, I don't mean that disparagingly... just that, to me, it looks like Flame. It doesn't seem a friendly or approachable interface but rather is very deep and filled with identical pop-up style gray buttons.
  2. Don't confuse Image Detail with the "detail enhancement" option on many cameras. That option is as bad as turning on gain and should be avoided unless you're looking for a "video" look. And even then, that kind of sharpness can be added in post - so save it for post...

- pi

How To Prep An FCP Sequence For Finishing @ Fini

Our clients generally bring their footage to us in one of two ways:

  1. They bring their camera originals which we redigitize.
  2. They bring their footage (usually DV) on a firewire drive and we begin finishing directly from those files.

Both methods have their challenges. For now, because I've had to write out these instructions to two clients in the past week, let's focus on Method #2. These techy instructions are specifically for shows cut on Final Cut Pro...

The end result: You'll create a new project with a new timeline that's exactly the same as your current timeline - only it points to newly copied media that's been trimmed to only the footage needed to play back your timeline. We'll include 15 frames of handles for each shot, so we can slip and slide 15 frames in either direction - if need be (no edit is ever truly locked).

Preparation

Because we use Apple's new Color software so heavily in our workflow, some preparation needs to go into this process that can be neatly classified as 'busy work': all speed changes, time remaps, freeze frames, or jpeg / tiff files in your project must be rendered out and re-edited back into the sequence. Same thing with nested Motion or LiveType projects. On documentaries this is not an insubstantial amount of work. But currently, we have no choice - it's a limitation of the Color software, which is powerful enough to be worth the hassle.

Once that's done take a look at your timeline. When you edit do you "build up" your timeline, saving alternate takes in video tracks below the topmost, visible clip? If so, you need to play the role of a good Sous Chef and reduce your timeline down so it includes only the clips necessary to recreate your timeline. Everything else must go. To avoid confusion in the finishing session I suggest dropping everything down to V1. Then dedicate other tracks to specific elements... V2 for overlapping dissolves or composites, V3 & V4 for titles and graphics, V5 for the letterbox, etc...

Using the Media Manager

Once the timeline has been properly prepared, it's time to copy your footage onto the drive you'll be bringing to the finishing session. Don't do this directly from the Finder. Why? Final Cut Pro doesn't always like its media handled this way. Also, we want to reduce the number and size of files you're copying to the bare minimum. We only want the files referenced from your newly reduced timeline, and we only want 15 frames of 'handles' before and after each clip. To do that, follow these steps...

1. In your current project, in the Browser right-click on the current sequence you want to send to Fini.

2. Select "Media Manager"

3. Here's a screen shot of the settings to use inside the Media Manager:

[screenshot: Media Manager settings]

4. Click on "Browse" under Media Destination. Navigate to the drive you'll bring to the finishing session and put the files in a new folder named "MEDIA_TO_FINI".

5. Before pressing OK recheck the following: 
  • The green "Modified" bar should be much shorter than the green "Original" bar. If not, something's probably wrong.
  • Be sure you are choosing the "Copy" function - nothing else, or things will go terribly wrong.

6. Click "OK"

7. A dialog will open asking you to name a new project which will reference this material. Give it a meaningful name and save it to the top level of the drive where you're putting the MEDIA_TO_FINI folder.

8. Let the machine run. Depending on the speed of your processor and how your drives are attached, expect this to take a while and the machine to be unavailable during this process. Maybe even a very long while. On a recent 70 minute doc this step took about 75 minutes, with FCP constantly updating as to what shot was being trimmed and copied.

Check Your Work

9. When finished, close all current projects, then open the newly created project on the drive you'll be bringing.

10. Open the timeline, select a shot in the timeline and press Command-9. Look at the file path for this clip and be sure it's pointing to the hard drive / folder you've set as the copy location. Double-check any speed changes, freeze frames, and graphics - ensuring they're all correct. You should also watch the whole thing down.

11. You're done.
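
One optional extra sanity check before you hand over the drive: a quick way to see what Media Manager actually copied (hypothetical volume path; point it at wherever you put MEDIA_TO_FINI):

```python
import os

# Walk the trimmed media folder and report file count and total size, so you
# can confirm the copy is dramatically smaller than your full source media.
media_folder = "/Volumes/FiniDrive/MEDIA_TO_FINI"   # hypothetical path

total_bytes = 0
file_count = 0
for root, _dirs, files in os.walk(media_folder):
    for name in files:
        if name.startswith("."):        # skip Finder metadata files
            continue
        total_bytes += os.path.getsize(os.path.join(root, name))
        file_count += 1

print(f"{file_count} media files, {total_bytes / 1e9:.1f} GB total")
```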

- pi


ProResSD - On a G5? You might want to avoid it

After the results of the ProResSD tests I performed, I started using it in situations where pristine quality isn't a concern. And the more I use it, the less likely it is I'll continue to use it.

This has nothing to do with its quality and everything to do with the fact that I'm still on a G5 (Dual 2.5).

If I do *anything* to ProResSD compressed images, the image quality drops to "Preview" - meaning I have to render before doing any outputs. After 18 months of working on this machine, in which I can often have two 3-Way Color Correction filters, plus Broadcast Safe, with a crop or reposition, and have everything play back at full quality with no rendering - having to force a render for even the slightest repo is driving me nuts.

I can safely say I'll be using ProResSD a lot less than I thought I would. Considering that a Quad-Core is in the near-term future I don't have the time or inclination to deal with the extra overhead forced upon me by this new codec.

If anyone has any experience using ProResSD on a Mactel, please drop a message in the Comments box. I'd love to know your experiences.

- pi


ProRes SD Results (Finally)

2 weeks ago I finished my testing on the ProRes SD flavor of Apple's new codec. While many have tested the ProResHD variant, the SD variant hasn't been quite as scrutinized. Perhaps that's because hard drives have gotten so large and pipelines so wide that a 25 MB/s stream isn't the bottleneck it used to be? Still, if you can save the space without giving up quality, why not jump off the Uncompressed codec and onto the ProRes bandwagon when working at Standard Definition? To make an informed jump, we need some facts. What follows is the result of some of my "fact-checking" as I determine if ProRes SD is worthy as a 'finishing' codec and can withstand a common finishing workflow.

And if you want more info on the ProRes codec, here's the direct download of Apple's ProRes white paper.

Judging Criteria: To judge if ProResSD was a "finishing" codec I decided I had to be able to cut, mid-shot, the original 10bit textless back into the 3rd Generation ProResSD Protection Master - as if I were creating an International Generic Master. And at the edit point it had to have no visible difference to both the human eye and the waveform/vectorscope. This is a test I know a fully 10bit uncompressed workflow could easily pass. And frankly, this is not a very challenging test even for an analog tape format like D-2 (assuming an all-digital environment). So my judging on ProResSD will be fairly harsh - it needs to be perfect.

Methodology:
Using a reality series I finished earlier this year as reference footage - I created a 2 minute test sequence comprised of interiors, exteriors, day, night, interview and run & gun situations. The footage was originally shot anamorphically on DVCPro, conformed in an Avid at 1:1 and then it was output to Digibeta for final finishing in our FCP finishing bay. I captured the footage via Decklink HD Pro SDI to 4 codecs:
  • 10bit Uncompressed
  • ProRes SD (High Quality)
  • ProRes SD (Standard Quality)
  • DV
The footage was then color corrected using Apple's new Color app and rendered back to its corresponding codec. The exception here was DV, which I "promoted" to a ProRes HQ sequence before sending it to Color, and then treated as an HQ sequence from there on. I was curious if ProRes would further degrade the DV material, with its inherent macroblocking issues. I color corrected the 10bit sequence first; for subsequent sequences I imported the corrections from the 10bit pass.

Simulating the worst-case scenario for a show being delivered to a network - I assumed the footage would be output and recaptured several times:
  • Textless Output
  • Textless Captured, Master Output
  • Master Captured, Protection Output
  • Protection Captured, International Generic created
Finally, I used the 10bit uncompressed textless as the base comparison, differencing it with the 'International Generic' of each of the other three codecs to identify the most challenging / degraded images.

Difference Tests (images will open in new windows):

  • Digibeta Capture: download image (2MB)
    I did this series of difference tests mostly from curiosity. It compares the 10bit Uncompressed to each of the other three codecs before any other processing. It gives an idea as to how much detail each codec throws away. If you've ever wondered why so many of us despise working with DV, every bit of detail you see in these tests is detail retained by the 10bit Uncompressed codec and thrown away by the DV codec.
  • Color Correct Output: download image (2MB)
    After rendering the color correction out from Color, I did another series of differences versus the 10bit render. I was looking for any obviously increased degradation that wasn't seen in the first set of Digibeta Capture difference tests. I don't see any. Color seemed to render them cleanly - especially the DV rendered out as ProResHQ which didn't seem to suffer any additional degradation, leaving open one interesting workflow possibility for those with constrained budgets and originating from DV.
  • 3rd Generation Tests: download image (2MB)
    After round-tripping from FCP to Digibeta 3 times, I again made a series of differences, this time between each codec's Textless and its 3rd generation, to see how well it held up. The 10bit Uncompressed was rock-solid black, so I didn't bother to include it here.

Frankly, I was surprised how well the DV Promote workflow held up. After the initial hit during capture, the ProResSD didn't allow it to degrade any further. In my opinion, this is a viable workflow for DIY'ers who don't have SDI workflows available to them. And as you can probably see from the difference tests, ProResSD is indeed lossy - but we already knew that; Apple doesn't claim otherwise. Which brings me back to where I started:

Conclusion

Q: Can a 3rd generation copy be visually distinguished when edited mid-shot into a 1st generation copy, or can the difference be easily observed using a waveform monitor or vectorscope?

A: The answer to both parts of that compound question is... When playing at speed, 1st Generation 10bit is indistinguishable from 3rd Generation ProResSD. I can't see the edit. By that standard ProResSD is indeed a finishing codec, even though we know there's been slight generational loss, as observed in the difference tests.

But: When paused on identical frames and quickly toggling between 1st generation Uncompressed and the 3rd generation ProResSD - levels and chroma are rock solid steady, but there is an oh-so-slight softening of the image. It's slight enough that most of my clients won't be able to see it. Heck, I barely see it. Though once I noticed it on the monitor and looked back at my scopes, I could see a teeny softening of the trace. It wasn't evident in every shot, only in those with heavy detail (usually in the background). So...

ProRes SD is an impressive codec. While taking only double the storage space of DV, it gives 98% of the quality of Uncompressed. Good enough for finishing purposes? Yes. I would not use it for heavy compositing where every drop of detail is essential. Unlike the HD variant, which I've heard is rock-solid through (at least) 10 generations, the SD variant's lossiness is apparent after 3 generations.
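If you'd rather put a number on that softening than trust your eye and the scopes, one crude check is to compare how much high-frequency detail survives between matching frames - the variance of a Laplacian response drops as an image softens. This is only a sketch, not something I ran for these tests, and the file names are hypothetical:

    # Crude sharpness comparison between two matching frames.
    # File names are hypothetical - export your own stills to try it.
    import numpy as np
    from PIL import Image

    def sharpness(path: str) -> float:
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        # 4-neighbour Laplacian computed on the interior of the frame
        lap = (img[1:-1, :-2] + img[1:-1, 2:] + img[:-2, 1:-1] + img[2:, 1:-1]
               - 4 * img[1:-1, 1:-1])
        return float(lap.var())

    gen1 = sharpness("uncompressed_frame.png")
    gen3 = sharpness("prores_gen3_frame.png")
    print(f"gen1: {gen1:.1f}   gen3: {gen3:.1f}   retained: {100 * gen3 / gen1:.1f}%")

It's no substitute for a calibrated monitor and a trained eye, but it will flag the shots worth toggling.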

And here's where the rubber meets the road: Will I be using it as my codec of choice? Not for network deliverables. I want my images as pristine as possible, and with storage space so cheap, 25 MB/s isn't that big a deal anymore. But I will use it for creating DVD, web deliverables, screening copies, etc. - replacing 8bit uncompressed as my codec of choice for those elements. And on low budget projects without compositing needs, I'm sure there will be a few where I'll advocate capturing ProResSD and using it from the first Assembly all the way through to the final Master.

UPDATE: If you're running on a G5, be sure to read this follow-up post on why ProRes isn't quite so thrilling on those machines.

- pi


ProRes SD Update

In case anyone is playing along... I was about to re-do portions of my workflow test when the dot upgrade to FCS2 rolled out. I put it on hold and have been busy with clients since then.

I have re-worked my testing workflow to ensure my results are reliable - but the holiday is upon us and I'm off the rest of the week. Next Monday or Tuesday I should be back on this side-project.

- pi


ProRes SD in Practice

UPDATE: Testing has been completed. Different conclusions have been drawn. Final results are here.

One of the new workflows introduced by Apple in Final Cut Studio 2 is a lightweight codec called ProRes 422. According to Apple's ProRes White Paper:

Apple ProRes 422 is changing the rules of post-production. The combination of industry-leading image quality, low data rates, and the real-time performance of Final Cut Studio 2 makes ProRes 422 the ideal format to meet the challenges of today’s demanding HD production workflows.

If you read the White Paper the emphasis is almost entirely on HD, even though an SD variant ships with FCS2. After some testing I think I understand why...

ProRes422 SD seems quite lossy. After 3 generations I'm seeing a definite softening of details. It's graceful, similar to analog degradation in the more modern analog tape formats, but it's there. It's enough loss that I don't consider it a finishing codec - I'll be staying uncompressed. For editors out there who ran digital or analog component rooms - I'd compare it to D2 running through a digital switcher. I used to go 6-8 generations on that format in a well-designed edit bay. I didn't take ProRes SD that far - but I don't have much hope it would fare any better.

I'll be posting a full-blown review of ProResSD in the next two weeks - but one word of warning about a proposed workflow I've seen discussed online: putting DV footage into a ProRes timeline (or "promoting" it, as you would into an Uncompressed 8-bit or 10-bit timeline) is a good way to give your footage an untimely death. I'll be re-testing my results to be sure, but for now I'd advise against it (especially if you plan on running it through Color).


FCP's Multicam

This past summer I finally had my first opportunity to give Final Cut Pro's multicam feature an extensive workout - cutting (4) one-hour shows for TV One. From initial rough cut through Media Management and final delivery, I was eating and sleeping with this new feature and the overall experience was pleasing.

Before I dig into some of the nooks and crannies, I want to mention Ripple Training's Essentials of Multicam Editing and Advanced Multicam Workflow downloadable Quicktime tutorials. You can buy both for $30. If I didn't think they were worth the money, I wouldn't be linking to them. If you're like me and you had 45 minutes before the client walked in the door to become conversant with Multicam on FCP, these two tutorials will make that happen... and you'll have 10 minutes to spare.

The multicam feature has the bells and whistles a professional editor would expect: On-the-fly camera switching; video and audio independently switched (or simultaneously); 1-up, 4-up, 9-up, 16-up split screens; cameras can be repositioned in the source window; cameras grouped by timecode or in-point; and more.

I had two issues with the feature:
  • Making Multiclips : Camera Order
    When grouping cameras into a multiclip there's no way to arrange them so they'd appear in the order I wanted. Since there were tape changes during the course of this shoot, I had to go to the 9-up display and scroll through the multiclip, option-dragging windows and rearranging the order of the clips. It was annoying, more than anything else.

  • Menu Item : Keyboard Layout
    After laying down my initial edit of the sequence, selecting my cameras and trimming the content, I'd start working in a hybrid mode: sometimes I'd be in multicam selecting cameras, other times I'd be doing traditional editorial (slipping, sliding, tweaking). Not a problem, except that to work efficiently in multicam you need to pull up the multicam keyboard layout - where common editorial operations are replaced by common multicam operations. Then I'd have to switch back to my custom keyboard for normal editing. I found myself constantly stumbling between the two layouts, never quite sure which one I was in. Again, it's more annoying than anything... it takes me out of my rhythm.

Overall I was quite pleased with this feature. It worked as advertised and now with FCP version 5.1, Final Cut is extremely stable. Good show, Team FCP.


Unity vs XSan

I had to take down the section of the Avid vs FCP article dealing with Unity. It seems my understanding of FCP SAN solutions is out of date. XSan has had a real impact on large FCP installations. The FCP mailing list is buzzing as we're all being brought up to speed, and I'll be going through some PDFs to better understand how it works.

I'm holding off wider distribution of the article until I'm sure it's updated to fairly reflect current realities.

Geez. I had forgotten why I don't update this article very often... it's nerve-wracking trying to get it right.

- pi

Avid vs FCP 2006

It's finally up: the latest version of my Avid versus Final Cut Pro article. I originally wrote it in 2002, and this is the second update.

Also: I've been playing with a new plug-in that I think I'm going to do a quick write-up on... dealing with garbage mattes and rotoscoping from within FCP.

Stay tuned...

- pi