I only have time for a quick post tonight...
The last 3 weeks (and for the next month) I’ve had the opportunity to work on JL Cooper’s MCS series of hardware controllers. Last week I posted on the Color-L mailing list that the customization software for the Spectrum colorist control surface basically... well, sucks. It’s buggy and it doesn’t have half the controls that the Eclipse software has. I was very disappointed. My buddy Mitch responded that he was told at NAB that the Eclipse software would drive those panels.
The thought hadn’t occurred to me. On Monday I installed the Eclipse software (instructions here) and it worked. I imported my keyset and that worked as well! Joy, oh happy day.
One small tweak had to be made since the Eclipse does have one extra button that the Spectrum doesn’t.
So Spectrum users - get out there and behold the power of a fully functioning control surface. I promise, you won’t be disappointed!
Subscribe in a reader
I'm a big fan of colorist control surfaces. My company invested in the JL Cooper EclipseCX. I'm approaching the 6 month mark of ownership and I've found that it's not without its own set of quirks and annoyances. Prime among those annoyances is the fact that Apple's Color natively offers only limited support for this control surface. From my original review:
Important keyboard commands are missing, as are the Master Gain/Gamma/Lift controls. Moving quickly between shots using the transport buttons is too unresponsive. When copying and pasting grades there are too few buttons chasing too many controls. . . Keyframe management is clunky and would work better if placed elsewhere on the panel. . . Overall, I think the Color team really should take another look at their control surface support for the JL Cooper and tidy things up a bit.
Here's the bad news: The software is an initial PITA to set up. Royally. Unpredictably. Frustratingly. PITA.
I've complained mightily to the JL Cooper Powers That Be about the nonsensical installation problems that surround getting the Eclipse software up and running for the first time. Why does it take so long? I have no fracking idea. But I've installed this software a dozen times in two different locations and it generally takes about 45 minutes - and I (think I) know what I'm doing.
But I've finally developed a few methods for making the install problem as painless as possible. Here's how I do it, in its mind-numbing detail:
Disclaimer: The current b6 software is just that, beta. It's available for download off their website. Here's the link. Like me, use at your own risk. I am not employed by or in any way associated with JL Cooper other than as an end-user. If you want to bitch at them, please do so. Here's their contact page. If you, however, want help with setting up the software and ask for it in a nice manner - I'll be happy to do so either via the comments on this posting or, preferably, on the Yahoo Color-L mailing list (the latter is the preferred choice, since it can take me a few days to respond on the website).
The first time you do this - set aside a few hours. Don't try to squeeze this in 20 minutes before a session - you're asking for trouble. Let's start:
- Begin by making sure your control surface is talking to Color using the methods outlined in the Color manual. Don't bother with the Eclipse software until you've done this step. This will ensure you don't have other networking issues getting in the way of your install. Once it's working, write down the IP address and port you've entered into Color.
- Download the JL Cooper software from this page.
- Have you ever installed any version of JL Cooper Eclipse or MCS software before? If so, you must absolutely uninstall it using the provided uninstaller. Then go into ~/Library/Preferences and delete the .plist file associated with the JL Cooper software. If you leave that prefs file in there it'll destroy you. And it doesn't seem to be removed by the uninstaller. Removing this file has cleared up 80% of the issues I've had in the past.
- Restart the machine.
- Install the JL Cooper software
- Go into System Preferences > Universal Access and click Enable access for Assistive Devices.
- Restart the machine.
- Open the EclipseCX software. Go into prefs and enter the networking info that you wrote down in Step 1.
- Import the Color keyset from ~/Applications/EclipseCX Software/keysets/. You've now loaded the keyset that talks to Color. Modifications here affect how the Eclipse "talks" to Color.
- Test this software by moving a trackball and spinning some knobs. You should see the software interface respond. If not: Quit out of the software, turn off the control surface. Turn it back on. Log out of your account. Log back in. Open the EclipseCX software and re-test by pushing buttons, moving knobs, etc. It should be working now. If not, restart the computer and try again. NOW it should be working. If not, make sure the EclipseCX software prefs match the network settings on the Eclipse (which it should if you managed to have the control surface talking with Color directly.)
- Go to the menu setting Actions > Set Ethernet Port for Color Keyset and select the top choice. You can go with the default port number. I find that 61000 is a number that works better for me. It's rather arbitrary. Write this number down, we'll need it in a moment. Keep in mind, you might need to change it later if things don't work so well.
- Quit from the Eclipse software. If you're feeling confident, you may launch Color and proceed to the next step. If you want to be safe: Power cycle the Eclipse, then log out / log back in. I find this tends to clear things back to a normal state and increases my chances for success on the next step.
- Launch Color. Change the control surface Ethernet setting to: 127.0.0.1 Set the Port to match what you entered two steps above. If you're lucky - the EclipseCX is now talking to Color. If you're not lucky, do the 'normalization' tasks in the previous step. If it's still not working, reboot the machine.
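If you find yourself reinstalling often, the prefs-file cleanup from the steps above can be scripted. A minimal sketch, assuming the JL Cooper prefs use a "com.jlcooper" filename prefix (that prefix is my assumption - verify the actual .plist name on your system before deleting anything):

```python
import glob
import os

def find_jlcooper_prefs(prefs_dir):
    """Locate leftover JL Cooper preference files in a Preferences folder.

    The "com.jlcooper" prefix is an assumption -- check the actual .plist
    filename left behind by the installer before trusting this pattern.
    """
    pattern = os.path.join(prefs_dir, "com.jlcooper*.plist")
    return sorted(glob.glob(pattern))

# Typical usage: print candidates, then delete by hand once verified.
for plist in find_jlcooper_prefs(os.path.expanduser("~/Library/Preferences")):
    print("stale prefs candidate:", plist)
```

I'd keep the actual deletion manual - given how fragile this install is, you want eyes on the filename before anything gets removed.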
Does all that seem like a pain? It sure does to me. Drives me nuts. Here's the upside: Once I have it working, it's pretty much bulletproof. It doesn't go down. I've had it working for weeks at a time... until I install the next Beta version and I have to go through this whole routine again! It seems at least a few of the Tangent users aren't quite so lucky (cheap shot, I know... but Tangent users are a mighty quiet lot so I'll take it when I can get it).
Next time: I'll take you through how to customize the control surface and why you should bother. But here's a payoff until then - grab this file. It's the Color keyset I created for the b6 version of the software. It's quite different from what JL Cooper ships, but I think much more useful for the working professional. Be sure to read the pdf with it, it describes how I've set up the panel.
In this previous post I lamented how Apple seemed to be dragging its heels on providing BluRay authoring tools in its Pro Apps suite.
I got at least one fact wrong: Compressor 3 does export for BluRay.
Where did I go to find this out? Adobe!
Specifically, the DAV TechTable blog - which is filled with useful how-to's on BluRay authoring and I've placed into my RSS reader (now that I'm an owner of the Adobe Production Suite CS3 bundle, which supports BluRay authoring on the Mac).
Here's the post which gives explicit instructions on how to export from Compressor for BluRay authoring in Encore DVD. It's not a built-in preset in Compressor, so you'll want to build and save these settings as a Custom Preset.
If you're a glass half empty person, you've got to wonder why this setting isn't shipping as a preset in Compressor. Is it an ominous sign of Apple trying to keep its boot on the neck of BluRay? If you're a glass half full person, hopefully this is a positive omen that the next version of Final Cut Studio will have much more explicit support for BluRay authoring.
This post is a shameless plug for a great workshop being held in NYC on Saturday May 17th. If you know anyone who might be interested in the following, please forward them any of the URLs listed below.
One of the hot workflow topics these days is "tapeless acquisition". Whether it's P2 cards for your HVX-200 or those little SD cards for a Red camera, managing that data while on set has become a critical job. Frequently called the "Data Wrangler", the person who manages the off-loading, verification, and subsequent re-initializing of these cards holds a position of tremendous responsibility. Given that it's a relatively new crew assignment, training opportunities are few and far between - while the stakes of getting it wrong can be hazardous to one's career development.
If you live in the New York City metro area, next Saturday May 17th The Moving Pictures Collective (Mopictive) is offering a Tapeless Acquisition Workshop. By the end of the day you'll walk away with a system for data management that can be applied to any tapeless shoot. It's being taught by Michael Vitti - the Fearless Leader of Mopictive, who has extensive experience with "data wrangling" - and Jamie Hitchings - an Apple Certified instructor - who will walk the attendees through the entire Log & Capture process. Special Guest is a great guy I've known for many years, Michael Woodworth of Divergent Media, developer of the software app ScopeBox. He'll be talking up scopes (how to read them, how to use them, and why you need them) and monitors - a great ancillary skill for anyone who's trying to break into on-set work.
Here's the rub - signups have been light. If a few more people don't sign up before next Tuesday or Wednesday, the event will be cancelled. Keep in mind, class size is limited to 10 people. This is nearly a one-on-one workshop. You'll have full access to the instructors and plenty of time to get all your questions answered. You'll learn the theory, which can be applied to any tapeless situation, as well as practical applications that'll allow immediate implementation of that theory.
You can find out more details about the workshop here.
You can sign up here. Price is $300.
Full Disclosure: I am the Treasurer of Mopictive (which is a DBA of the New York Final Cut Users Group and also a certified NYS 501c3 not-for-profit). Over 50% of the proceeds will go to Mopictive and furthering its mission of training Digital Storytellers.
UPDATE 2: It's been commented to me that my opening line, "Color is broken" is a bit extreme. I'd agree - if you work in a purely progressive frame workflow or a purely interlaced workflow that involves no resizing, distorts, or anamorphic flags - Color is fine. For the rest of us... I think it's broken. (In fact, I had a meeting this afternoon where I made clear my preference for progressive with no mixed formats in a single timeline)
But absolutely - decide for yourself if this bug breaks Color for you.
UPDATE 1: More on the Geometry Room issue I mention in the original posting - A poster on the Apple message board mentioned that he uses the Geometry Room to zoom in on skin tones to check in the scopes that they properly lie on the skin tone line (something I first saw suggested in the Ripple Training Color tutorials). He'd then click the reset button in the Geometry Room and move on to his next task. In my own testing I've confirmed that this is enough to force Color into frame-blending mode on interlaced footage. Pressing reset doesn't help. Once a shot is flagged as having touched the Geometry Room - that shot is toast.
If you have an external CRT hooked up to your system (you do, don't you?) it's easy to confirm that this is happening. Just park on a frame that exhibits the typical jitter of interlaced footage (most evident when there's lots of motion on the screen). Go into the Geometry Room and change a setting. The jitter disappears. Color has suddenly decided to frame-blend this shot. Click Reset. The jitter doesn't re-appear (like it would in previous versions of Color). The shot is still flagged for frame-blending. Switch to a new grade. Still no jitter. Whatever else is happening, switching grades doesn't fix the problem.
I haven't found a workaround to this particular problem.
Color is broken.
But before I get to the specifics, some quick background.
There's an old problem that dates back to Color's Final Touch days, before the Apple purchase. In those days (and to a certain extent, these days as well) you had to be very, very careful how you handled interlaced footage. Color was originally designed for high-end Digital Intermediate work - which means it was optimized for a film-based progressive RGB workflow.
It wasn't until development was well under way that the original management team decided to open up the software to High Def and Standard Def formats. In doing so, they never really solved how to get Color to handle interlaced footage that had to be blown up, shrunk down, or repositioned. If you "repo'ed" a shot that was recorded in an interlaced codec, all you got back was mush. That "mush" ranged from a slightly softened image to a horribly destroyed one, depending on the nature of the content.
To get semi-technical: The problem exhibits itself as really bad frame blending.
When Color was released, Apple decided to avoid the whole "mushy image" problem by having Color ignore all Motion Tab effects and let FCP handle that portion of the job. It was a smart way to address the issue. And it worked. With emphasis on the past tense.
Interlaced footage is broken again in Color 1.0.2.
In my testing last week I found that when it comes to handling standard-def footage, there was only one way to avoid the "mushy image syndrome" - being sure both of these are true for any project I send to Color:
1. No repo's, distorts, or anamorphic flags on the footage.
2. The FCP timeline frame size must be a preset that exists in Color. For instance, 960x720 always renders with frame blending - no matter what and regardless of the previous Condition #1.
(Note: A recent posting on the Apple Discussion Board suggests that even doing a "repo" in the Geometry Room and then canceling it out is enough for Color to frame blend its renders)
What does this mean to those of us still working in the SD world?
It means we now have to go through our timelines and strip all motion effects before color correcting. And then add them back one by one after color correcting.
This is NOT progress. It's been a year since I've had to do this and I had hoped we had put this behind us.
For all the nifty improvements in Color 1.0.2 - for me and my clients - this workflow is not worth the pain. But there's a question that, after a weekend of pondering, I haven't found an answer to:
Is it safe to reinstall just Color and upgrade it only to Color 1.0.1?
The Color 1.0.2 upgrade happened in conjunction with the entire Final Cut Studio 6.0.2 upgrade. And that upgrade contains some very important bug fixes within Final Cut Pro.
So do I add a half day to every job to handle the new bugs in Color 1.0.2? Or do I add a half day to every job because Final Cut Pro 6.0.1 loses my renders and I have to spend 4 hours re-rendering?
My head's spinning here. And my favorite people in the world whom I've never met (the entire FCP and Color teams) are responsible for it.
Is this the perfect Monday morning blog post, or what?
Second, you can save everything you've done to a shot - Primaries, Secondaries, ColorFX, and Geometry Room - collectively as a single file. Color calls those "Grades". I use grades extensively. Organizing grades tends to be fairly straightforward. Park on the shot whose grade you want to save, go into the Setup Room (under the Grades tab) and type in the name of your grade. Typically you'd name it something meaningful, "attorney_v001" for instance.
When you're finished with your show you might have a list of saved grades that looks something like this:
You'll notice I have all my grades grouped together by names. eric_ruddy_couch has several variations followed by eric_2shot, also with several variations. And so on... resulting in all my Eric grades staying grouped together - and then sub-grouped by scene, angle, etc. No rocket science here. Pretty basic stuff.
But what happens when the number of grades you want to save for retrieval quickly expands beyond your ability to come up with meaningful descriptive names?
This happened to me recently on a doc that had 3 main subjects - but also a few dozen repeating interviews. There was no way I was going to individually name each and every setup. But at the same time, I needed a way to quickly find a saved grade for each subject.
I started by (1) switching the Grade bin to Icon view and (2) allowing Color to Autoname my grades.
Color's auto-naming system is less than useful. As you'll see in the following image Color uses a Date/Timestamp to generate names (thus the need to work in Thumbnail View):
And yet, in the midst of its generic naming style how did I keep my grades organized by location and sub-grouped by person?
The answer: I used the Unix-style file name quick-fill feature that can be accessed in most Save dialog boxes on OS X.
Here's how it works in this context:
1. After switching into the Grade bin, decide where you want the Grade to show up inside your bin. If you want it at the end of the bin, then simply accept the auto-generated name from Color. It'll be placed at the end of the line.
2. If you want to place a Grade specifically next to another thumbnail simply highlight - DON'T double-click - just highlight / single-click the Grade. The name of that grade will automatically fill the File Name box.
3. Move your cursor to just before the .colorgrade extension and append it with a number. I usually start with the number 2 (applause, applause), from there I'll increment upwards.
If I want to sub-group I'll append my numbers with letters so that datetimestamp2a falls between datetimestamp2 and datetimestamp3.
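The trick works because the bin sorts filenames lexicographically. A quick sketch showing how the appended digits and letters interleave (the timestamp-style names here are made up for illustration):

```python
# Hypothetical auto-generated grade names with suffixes appended
# just before the .colorgrade extension, as described above.
names = [
    "grade_20080501_101500.colorgrade",    # the original auto-name
    "grade_20080501_1015003.colorgrade",   # appended "3"
    "grade_20080501_1015002a.colorgrade",  # sub-grouped between 2 and 3
    "grade_20080501_1015002.colorgrade",   # appended "2"
]

# A plain lexicographic sort lines them up in the intended order:
for n in sorted(names):
    print(n)
```

Because "2" sorts before "2a", which sorts before "3", the sub-grouped grade lands exactly between its neighbors - no meaningful names required.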
The other side-benefit of this naming style - it's fast.
If this post reads as confusing, post a comment and I'll try to clear it up.
It's written in a diary format, since I found that my perception of the device changed as I used it and became more proficient on it. I think it'll help mouse-driven colorists better understand what the transition to the control surface was like.
I've got a few blog postings on Color workflow that have been bottled up as I was focused on (1) working on paid jobs and (2) writing the review.
I'm teaching a color correction class in a few weeks. If you're interested I suggest you sign up now - it's a small class size (20 enrollees, max).
I kind'a hate the name of this class, since I don't consider myself a Master - just someone who has taken a keen interest in the topic and pursues it professionally. This class is a full day seminar covering the theory behind video-based color correction techniques and then the application of those techniques to Final Cut Studio 2.
This seminar is a collaboration between myself, Mopictive (a 501(c)3 non-profit; I'm a board member), and Manhattan Edit Workshop (Jamie Hitchings, an Apple-Certified instructor, will cover material contained in the Apple Pro Series book Advanced Techniques and Color Correction in Final Cut Pro). It's a jam-packed day. I last did this class in the Spring and it was pretty well received. This time around I'm going to add more material on properly setting up lighting, as well as provide a list of online retailers to help you execute a lighting plan.
Cost: $300 with 50% of the proceeds going to Mopictive (the NY Final Cut Pro User Group) and the remaining split between the facility providing the equipment (every enrollee gets their own workstation) and the instructors. You can sign up over at Manhattan Edit Workshop's website.
Sign-up: Call Amber 212-414-9570
Place: MEWShop, November 03, 10a - 5p
I attended the "Red Event" last night at Tekserve.
It was a generally uncomfortable event in which 150 people were jammed onto the showroom floor with inadequate air conditioning (Tekserve is always an uncomfortable place to shop) and stood for an hour. It seemed most people watched the event from screens throughout the store, and the tallest people in the room had been given priority access to the first row, blocking everyone's line of sight... chairs would have been better.
I'm not going to go into Red workflow specifics because so few people have access to the Red camera. The people that are now shooting Red have workflows that are far beyond the scope of the clients I choose to serve. In a few more months we'll be able to test and refine a Red workflow "for the rest of us". But Red is an amazing technology and it was great to see the owners of Red #6 & #7 presenting to the NYC community.
Here are some of my impressions:
- Red should be hugely desirable to the Fini client base. It's affordable, accessible, scalable, and future-proof. It's a disruptive technology an order of magnitude larger than Final Cut Pro was disruptive. It will put a lot of people out of work... but give opportunity to far more people.
- Red is a complex workflow - largely because of its scalability. There will be several unique and distinct workflows for different deliveries. Some purists will rail against the DV crowd taking up this camera... they will argue that everyone should be delivering 4K all the time... they will be wrong. But the clients they serve will also feel the same way, so there's no need to worry that the Red camera will bring us all together in a Kumbaya / We Are The World oneness.
- The Red team isn't telling how many cameras are reserved, only that the number is in the thousands (which I take to mean more than two thousand). Compare that to the number of Vipers and Dalsas out in the field shooting today - it's as if Apple had sold 10 million iPhones by November; a crazy-big number.
Also showing at Tekserve last night was Scratch - a high-end software-based color correction app. I was intrigued by its power, flexibility, and depth. And unlike Color, it can read the RedCode directly - no need to transcode to some intermediate codec like ProRes. But at $50k a seat - it's not for my clients. It's priced for facilities running the Autodesk products (Flame / Smoke). In fact, the GUI looks like Autodesk funded the project. It's a total and complete Flame rip-off. There are some nice breakaway 'widgets' for moving between modalities, but it's an interface partly designed to make high-paying clients comfortable that their money is going toward hefty lease payments.
I was disappointed that the Scratch guys never got around to showing us Red Alert (I think that's the name of the app), which is currently shipping with Red. It's designed for evaluating and modifying images from the camera, both in the field and in post. Considering this was a Red event, I was a bit peeved that Assimilate turned the demo room into a Scratch event. Poor showing, boys.
The Big Takeaway
The presenter at the event (one of the owners of Off Hollywood Studios) made a point that I think is relevant to anyone creating pictures. He mentioned how the images coming off the sensor don't look all that pretty. He said the goal with a camera like Red is to concentrate on latitude - don't clip highlights or shadows. Pretty is done in post; capturing as much dynamic range as possible should be the objective. I think he's dead-on correct. But I don't think this is only true for the Red camera. In fact, this is especially true for DV or DV50 shooters.
Yes, you want good lighting and a talented DP is as critical as ever. And a talented DP will preserve as much detail in the image as possible.
Image Detail = Production Value
One ingredient to make your video look like not-video is to preserve your highlights and not let your shadows fall into total blackness.
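If you want to put a number on "preserved highlights and shadows", here's a rough sketch of how you might quantify crushing and clipping in a frame's luma values. The 16/235 thresholds assume 8-bit Rec. 601 video levels; the sample values are made up:

```python
def clipping_report(luma, black=16, white=235):
    """Return the fraction of pixels at or below video black and at or
    above video white. Defaults assume 8-bit Rec. 601 levels (16/235)."""
    n = len(luma)
    crushed = sum(1 for v in luma if v <= black) / n
    clipped = sum(1 for v in luma if v >= white) / n
    return crushed, clipped

# e.g. a tiny made-up frame: 2 crushed and 1 clipped pixel out of 8
crushed, clipped = clipping_report([16, 10, 60, 120, 180, 200, 235, 90])
print(f"crushed: {crushed:.1%}, clipped: {clipped:.1%}")
```

A frame with detail preserved would show near-zero fractions at both ends; big numbers at either end mean detail that post can never recover.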
UPDATE - Two quick notes:
- When I say the Scratch GUI looks like a Flame rip-off, I don't mean that disparagingly... just that, to me, it looks like Flame. It doesn't seem a friendly or approachable interface but rather is very deep and filled with identical pop-up style gray buttons.
- Don't confuse Image Detail with the "detail enhancement" option on many cameras. That option is as bad as turning on gain and should be avoided unless you're looking for a "video" look. And even then, that kind of sharpness can be added in post - so save it for post...
- They bring their camera originals which we redigitize.
- They bring their footage (usually DV) on a firewire drive and we begin finishing directly from those files.
Both methods have their challenges. For now, because I've had to write out these instructions to two clients in the past week, let's focus on Method #2. These techy instructions are specifically for shows cut on Final Cut Pro...
The end result: You'll create a new project with a new timeline that's exactly the same as your current timeline - only it points to newly copied media that's been trimmed to only the footage needed to playback your timeline. We'll include 15 frames of handles for each shot, so we can slip and slide 15 frames in either direction - if need be (no edit is ever truly locked).
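The handle arithmetic itself is simple. A sketch of what "trimmed media with 15-frame handles" means for a single clip - the used range padded on each side, clamped to the source media's bounds (frame numbers are illustrative; Media Manager does this for you):

```python
def trimmed_range(clip_in, clip_out, media_start, media_end, handles=15):
    """Compute the media range to copy for one clip: the frames actually
    used in the timeline, padded by `handles` frames on each side,
    clamped so we never reach past the source media's boundaries."""
    copy_in = max(media_start, clip_in - handles)
    copy_out = min(media_end, clip_out + handles)
    return copy_in, copy_out

# A clip using frames 100-200 of a 0-1000 source copies frames 85-215,
# leaving 15 frames of slip room on either side of the edit.
print(trimmed_range(100, 200, 0, 1000))
```

Note the clamping: a clip cut right at the head of its source simply gets whatever partial handle exists, which is why some shots in a media-managed project have less slip room than others.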
Preparation
Because we use Apple's new Color software so heavily in our workflow, some preparation needs to go into this process that can be neatly classified as 'busy work': all speed changes, time remaps, freeze frames, or jpeg / tiff files in your project must be rendered out and re-edited back into the sequence. Same thing with nested Motion or LiveType projects. On documentaries this is not an insubstantial amount of work. But currently, we have no choice - it's a limitation of the Color software, which is powerful enough to be worth the hassle.
Once that's done take a look at your timeline. When you edit do you "build up" your timeline, saving alternate takes in video tracks below the topmost, visible clip? If so, you need to play the role of a good Sous Chef and reduce your timeline down so it includes only the clips necessary to recreate your timeline. Everything else must go. To avoid confusion in the finishing session I suggest dropping everything down to V1. Then dedicate other tracks to specific elements... V2 for overlapping dissolves or composites, V3 & V4 for titles and graphics, V5 for the letterbox, etc...
Using the Media Manager
Once the timeline has been properly prepared, it's time to copy your footage onto the drive you'll be bringing to the finishing session. Don't do this directly from the Finder. Why? Final Cut Pro doesn't always like its media handled this way. Also, we want to reduce the number and size of files you're copying to the bare minimum. We only want the files referenced from your newly reduced timeline, and we only want 15 frames of 'handles' before and after each clip. To do that, follow these steps...
1. In your current project, in the Browser right-click on the current sequence you want to send to Fini.
2. Select "Media Manager"
3. Here's a screen shot of the settings to use inside the Media Manager:
4. Click on "Browse" under Media Destination. Navigate to the drive you'll bring to the finishing session and put the files in a new folder, "MEDIA_TO_FINI".
5. Before pressing OK recheck the following:
- The green "Modified" bar should be much shorter than the green "Original" bar. If not, something's probably wrong.
- Be sure you are choosing the "Copy" function - nothing else, or things will go terribly wrong.
6. Click "OK"
7. A dialog will open asking you to name a new project which will reference this material. Give it a meaningful name, save it to the top level of the drive where you're putting the MEDIA_TO_FINI.
8. Let the machine run. Depending on the speed of your processor and how your drives are attached, expect this to take a while and the machine to be unavailable during this process. Maybe even a very long while. On a recent 70 minute doc this step took about 75 minutes, with FCP constantly updating as to what shot was being trimmed and copied.
Check Your Work
9. When finished, close all current projects, then open the newly created project on the drive you'll be bringing.
10. Open the timeline, select a shot in the timeline and press Command-9. Look at the file path for this clip and be sure it's pointing to the hard drive / folder you've set as the copy location. Double-check any speed changes, freeze frames, and graphics - ensuring they're all correct. You should watch the whole thing down.
11. You're done.
This has nothing to do with its quality and everything to do with the fact that I'm still on a G5 (Dual 2.5).
If I do *anything* to ProResSD compressed images, the image quality drops to "Preview" - meaning I have to render before doing any outputs. After 18 months of working on this machine in which I can often have two 3-Way Color Correction filters, plus Broadcast Safe, with a crop or reposition and have everything playback at full quality with no rendering - having to force a render for even the slightest repo is driving me nuts.
I can safely say I'll be using ProResSD a lot less than I thought I would. Considering that a Quad-Core is in the near-term future I don't have the time or inclination to deal with the extra overhead forced upon me by this new codec.
If anyone has any experience using ProResSD on a Mactel, please drop a message in the Comments box. I'd love to know your experiences.
And if you want more info on the ProRes codec, here's the direct download of Apple's ProRes white paper.
Judging Criteria: To judge if ProResSD was a "finishing" codec I decided I had to be able to cut, mid-shot, the original 10bit textless back into the 3rd Generation ProResSD Protection Master - as if I were creating an International Generic Master. And at the edit point it had to have no visible difference to both the human eye and the waveform/vectorscope. This is a test I know a fully 10bit uncompressed workflow could easily pass. And frankly, this is not a very challenging test even for an analog tape format like D-2 (assuming an all-digital environment). So my judging on ProResSD will be fairly harsh - it needs to be perfect.
Methodology: Using a reality series I finished earlier this year as reference footage - I created a 2 minute test sequence comprised of interiors, exteriors, day, night, interview and run & gun situations. The footage was originally shot anamorphically on DVCPro, conformed in an Avid at 1:1 and then it was output to Digibeta for final finishing in our FCP finishing bay. I captured the footage via Decklink HD Pro SDI to 4 codecs:
- 10bit Uncompressed
- DV
- ProRes SD (High Quality)
- ProRes SD (Standard Quality)
Simulating the worst-case scenario for a show being delivered to a network - I assumed the footage would be output and recaptured several times:
- Textless Output
- Textless Captured, Master Output
- Master Captured, Protection Output
- Protection Captured, International Generic created
Difference Tests (images will open in new windows):
Capture: download image
I did this series of difference tests mostly from curiosity. It compares the 10bit Uncompressed to each of the other three codecs before any other processing. It gives an idea as to how much detail each codec throws away. If you've ever wondered why so many of us despise working with DV, every bit of detail you see in these tests is detail retained by the 10bit Uncompressed codec and thrown away by the DV codec.
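For anyone curious what a "difference test" actually computes: it's the per-pixel absolute difference between two frames, where a pure-black result (all zeros) means the two codecs produced identical pixels. A minimal sketch on flat lists of 8-bit luma values (the sample numbers are made up):

```python
def difference_frame(a, b):
    """Absolute per-pixel difference of two equal-length frames.
    All zeros (pure black) means the frames are pixel-identical;
    any nonzero residue is detail one codec threw away."""
    return [abs(x - y) for x, y in zip(a, b)]

# Identical frames difference to black; a lossy copy leaves residue.
original = [120, 64, 200, 33]
lossy    = [121, 64, 198, 33]
print(difference_frame(original, lossy))
```

In practice you'd do this in a compositing app with a Difference blend mode and crank the gain to make the residue visible, but the math underneath is exactly this.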
- Color Correct
After rendering the color correction out from Color, I did another series of differences versus the 10bit render. I was looking for any obviously increased degradation that wasn't seen in the first set of Digibeta Capture difference tests. I don't see any. Color seemed to render them cleanly - especially the DV rendered out as ProResHQ which didn't seem to suffer any additional degradation, leaving open one interesting workflow possibility for those with constrained budgets and originating from DV.
- 3rd Generation
After round-tripping from FCP to Digibeta 3 times, I again made a series of differences, this time between each codec's Textless and its 3rd generation - to see how well it held up. The 10bit Uncompressed was rock-solid black, so I didn't bother to include it here.
Frankly, I was surprised how well the DV Promote workflow held up. After the initial hit during capture, the ProResSD didn't allow it to degrade any further. In my opinion, this is a viable workflow for DIY'ers who don't have SDI workflows available to them. But as you can probably see from the difference tests, the ProResSD is indeed lossy - but we already knew that, Apple doesn't make any claims otherwise. Which brings me back to where I started:
Conclusion
Q: Can a 3rd generation copy be visually distinguished when edited mid-shot into a 1st generation copy, or can it be easily observed using a waveform monitor or vectorscope?
A: The answer to both parts of that compound question is... When playing at speed, 1st Generation 10bit is indistinguishable from 3rd Generation ProResSD. I can't see the edit. By that standard ProResSD is indeed a finishing codec, even as we know there's been slight generational loss as observed in the difference tests.
But: When paused on identical frames and quickly toggling between the 1st generation Uncompressed and the 3rd generation ProResSD - levels and chroma are rock-solid steady, but there is an oh-so-slight softening of the image. It's slight enough that most of my clients won't be able to see it. Heck, I barely see it. Though once I noticed it on the monitor and looked back at my scopes, I could see a teeny softening of the trace. It wasn't evident in every shot, only those with heavy detail (usually in the background). So...
ProRes SD is an impressive codec. While only roughly doubling the storage space of DV, it gives 98% of the quality of Uncompressed. Good enough for finishing purposes? Yes. I would not use it for heavy compositing where every drop of detail is essential. And unlike the HD variant, which I've heard is rock-solid through (at least) 10 generations, the SD variant's lossiness does show after 3 generations.
And here's where the rubber meets the road: Will I be using it as my codec of choice? Not for network deliverables. I want my images as pristine as possible, and with storage space so cheap, 25 MB/s isn't that big a deal anymore. But I will use it for creating DVD, web deliverables, screening copies, etc. - replacing 8bit uncompressed as my codec of choice for those elements. And on low budget projects without compositing needs, I'm sure there will be a few projects where I will advocate capturing ProResSD and using it from the first Assembly through to the final Master.
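To put rough numbers on those storage claims, here's a quick back-of-the-envelope calculation. The rates below are my own approximations (DV25 at 25 Mb/s, commonly quoted ProRes 422 SD targets, and 10bit uncompressed SD at the ~25 MB/s, i.e. ~200 Mb/s, mentioned above), not Apple's exact figures.

```python
def gb_per_hour(mbps):
    # megabits per second -> decimal gigabytes per hour
    return mbps / 8 * 3600 / 1000

rates_mbps = {                       # approximate NTSC SD data rates (assumptions)
    "DV25": 25,
    "ProRes 422 SD (standard)": 42,
    "ProRes 422 SD (HQ)": 63,
    "10bit Uncompressed SD": 200,    # ~25 MB/s
}
for name, mbps in rates_mbps.items():
    print(f"{name}: {gb_per_hour(mbps):.1f} GB/hour")
```

Even the HQ flavor comes in at roughly a third of uncompressed per hour, which is why the "storage is cheap" argument cuts both ways.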
UPDATE: If you're running on a G5, be sure to read this follow-up post on why ProRes isn't quite so thrilling on those machines.
I have re-worked my testing workflow to ensure my results are reliable - but the holiday is upon us and I'm off the rest of the week. Next Monday or Tuesday I should be back on this side-project.
One of the new workflows introduced by Apple in Final Cut Studio 2 is a lightweight codec called ProRes 422. According to Apple's ProRes White Paper:
Apple ProRes 422 is changing the rules of post-production. The combination of industry-leading image quality, low data rates, and the real-time performance of Final Cut Studio 2 makes ProRes 422 the ideal format to meet the challenges of today’s demanding HD production workflows.
If you read the White Paper, the emphasis is almost entirely on HD, even though an SD variant ships with FCS2. After some testing I think I understand why...
ProRes422 SD seems quite lossy. After 3 generations I'm seeing a definite softening of details. It's graceful, similar to analog degradation in the more modern analog tape formats, but it's there. It's enough loss to say I don't consider it a finishing codec - I'll be staying uncompressed. For editors out there who ran digital
I'll be posting a full-blown review of ProResSD in the next two weeks - but one word of warning about a proposed workflow I've seen discussed online: Putting DV footage into a ProRes timeline (or "promoting" as you would into an Uncompressed 8-bit or 10-bit timeline) is a good way to give your footage an untimely death. I'll be re-testing my results to be sure, but for now I'd advise against it (especially if you plan on running it through Color).