After what has seemed like an eternal winter, the weather here in Colorado is finally warming up. Moreover, we seem to have shaken the 60 mph winds that made photography difficult last week. I finally got out to attempt a project that I’d been previsualizing for some time: lightpainting the Siamese Twins formation in Garden of the Gods.
I’ve photographed this formation before during the daytime; it’s a popular spot to catch the juxtaposition of the twin rock towers with the summit of Pikes Peak between them. But I’d never hiked to it at night.
Lightpainting is a technique whereby you artificially illuminate your subject with a flashlight or lantern. This technique enables you to control the exact placement of light in the scene, and you can use it to selectively illuminate subjects of interest. I headed up to the Siamese Twins with my gear in a Think Tank “Streetwalker Pro” bag. I had my D3s, 16-35/4, 24-70/2.8, and a 70-200/2.8 VR II. I also had my Gitzo tripod and a couple of strong flashlights. I reached the formation about 20 minutes after sundown and set up.
I’m pleased to announce the release of my newest eBook, The Photographer’s Guide to Digital Landscapes. I wrote this book to provide photographers with a modern-day assessment of the fundamental techniques for capturing fabulous landscape images. Of course, some of these techniques apply to everyday photography, too. I’ve got a lot of books on landscape photography, and they are all very good. But many of the “classics” don’t have anything to say about modern DSLR photography, even the books that say “updated for digital.”
So, I present to you, The Photographer’s Guide to Digital Landscapes, a three-part book that covers:
Back in the film days, workflow was pretty easy. Shoot a roll of slides, send it in for processing, and then put the results on the light table and pick out the select few images to scan and print. With film, many creative decisions were made for you. Each film type had a particular look and feel to it; the color palettes and contrast responses varied between emulsions. With film, what you saw in the slide was pretty much what you got out of a scan. Moreover, with film, I shot far fewer images than I do now with digital. The tangible cost component of film shooting kept the number of images down for most casual shooters. Shoot a couple of 36-exposure rolls, pick the keepers, scan ’em, and you’re done. Scanning slides was a tedious enough process that I really only chose the best images to scan.
Today, we shoot hundreds or even thousands of images with our DSLRs and high-capacity memory cards. Transferring these images to your computer only takes a few minutes, and there is no agonizing wait for film to return from the lab or for a scanner to digitize the slide or negative. That means we’re quickly filling up our hard drives with images that may never even have made it into a slide sleeve in the film days. Moreover, unless you shoot only JPEG, you are now the photo lab. Instead of choosing a film type to get a particular “look,” we have to process our own RAW files to achieve a desired result. The prospect of processing thousands of files is intimidating, to say the least.
If you shoot for your own personal pleasure, I’d like to recommend simplifying your workflow. Don’t put yourself into a position where you must process EVERY SINGLE FILE. Simply put, you don’t need to. Start by trying to get things right in your camera. Choose the right white balance and get the exposure right. Use camera settings that are appropriate for your subject; don’t shoot a portrait session using “VIVID” mode. You wouldn’t shoot a wedding with Velvia film, right? Once you’re back from your shoot, be picky. Choose the select few images that you really want to share, and only process those. Not only will you save time in post, but your friends and family will appreciate that you didn’t bombard them with every variant of every shot in a 100MB email bomb!
For more on my workflow and how I have integrated modern tools with Capture NX2, sign up for my NEF-Centric workflow workshop!
It seems like every six months or so, you see a newer, faster processor hitting the market. My original Mac SE, which cost me $1800 (with educational discount) in 1989, ran at a whopping 8 MHz. Today, we have dual-core, quad-core, and even octal-core machines running at 2-3 GHz. That’s fast. Memory is faster, graphics cards are faster, everything is FASTER! Except, that is, for your hard drive. Yes, while hard drives have gotten faster over the years, they now represent the weak link in your ability to load and process data. Let’s face it, most computers are not processor-limited anymore. Any halfway decent machine is going to run fast, and unless you are into hard-core video rendering, even the lower end of the processor line-up will be very good.
But back to those hard drives. They rotate. They have moving parts. They can fail. My wife’s hard drive failed after barely a year in a brand-new iMac. The good news is that I’ve seen the future, and it is amazing. Solid-state drives (SSDs) are the computer equivalent of the flash memory we use in our cameras. No moving parts, and damn near instant read times. SSDs have been out for a few years; I first saw them showing up as an expensive option in the Apple MacBook Air. I had no idea why anyone would want to pay such a premium for a drive that had less capacity than the standard HDD. Until I saw the light.
First of all, consider how we use our laptops. For many of us, the laptop is the “travel” computer. We have a primary desktop computer at home to do our heavy lifting. A clean install of my OS (in my case, Mac OS 10.6) with my critical applications runs about 45GB. Even with a 120GB drive, that still leaves me a good bit of free space. On my laptop, I don’t need to store every NEF I’ve shot or my entire music collection (hello, iPhone), so a 128GB drive is actually plenty. Moreover, I usually travel with a 250GB FW800 drive to store my images on, anyway.
One of the things I like about shooting in RAW is that I have the ability to override my in-camera settings during post-processing. The RAW safety net is tremendously useful, even if you get most things “right” on a shoot. One thing I don’t like, however, is using software that automatically throws away my in-camera settings because it thinks it is smarter than me. When I preview my images, I want to see what I had shot in-camera, even if I got it “wrong” (I like to learn from my mistakes).
I’m mostly talking about image browsers here: all those products that are “RAW savvy.” That’s really just code for a “built-in RAW converter” that will ignore all your in-camera settings. The problem with multiple RAW converters is that each one works with its own set of instructions. If you use Browser “A” to view your files, then process them in Application “B,” when you go back to Browser “A,” you won’t see any changes in your image previews. This conundrum is why we’re seeing a big push towards “soup-to-nuts” products like Adobe Photoshop Lightroom and Apple Aperture.
Take, for instance, the scenario where you shoot NEFs using different Nikon Picture Controls. By default, you can choose from four different core settings in your camera: Standard, Neutral, Vivid, and Monochrome.
When you look at the LCD preview on your camera, you can tell the difference between the images. Neutral is low-contrast and low-saturation, while Vivid is high-contrast and high-saturation. And Monochrome, well, it’s black and white.
Now consider what happens when you download those same four images and preview them in a browser that has its own RAW engine: