Quite often, when I deliver photos to customers, I get asked what DPI they are. And after explaining again and again that this parameter has nothing to do with the quality of a photo, I will do it once more here, for everyone who is still unsure about it.

What is DPI?

DPI stands for “dots per inch” and is a term used in printing. It determines how many dots (pixels) are used for every inch of the print. 300 DPI is quite a standard, but you can get printers that print at a higher DPI. It’s the same when you scan something, with the DPI showing how many dots you get for every inch of the scanned document.

DPI in photos?

You may have noticed that when you resize a document in Photoshop, there is a resolution setting in the dialog. This is the DPI setting. What does it mean? It’s simple. Every photo has a certain resolution in pixels. For instance, a 10 Mpix photo will be 3648×2736. If a photo like that is set to a DPI of 300, its print size will be 12.16×9.12 inches. And that’s all DPI does: it provides a conversion from the pixel size to a print size. You can see the difference between the pixel size and the print size by zooming to 100% (pixel size) and to Print Size in the View menu in Photoshop.

DPI in photography

If you choose not to resample a photo in the resize dialog and you change the DPI setting, your photo will stay the same in pixel dimensions; only the print size will change. So if you know the dimensions in pixels and you know the DPI you want, you can very easily calculate the print size. If you choose to resample and change the DPI, the pixel size will change, while the print size will stay the same.
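
To make this conversion concrete, here is a minimal Python sketch of the arithmetic (the function names are just for illustration): the DPI value does nothing more than convert between pixel dimensions and print dimensions.

```python
# Minimal sketch of what the DPI number actually does: it is just a
# conversion factor between pixel dimensions and print dimensions.

def print_size_inches(width_px, height_px, dpi):
    """Print size for given pixel dimensions at a given DPI."""
    return width_px / dpi, height_px / dpi

def dpi_for_print_size(width_px, print_width_inches):
    """DPI you effectively get when printing at a chosen width."""
    return width_px / print_width_inches

# A 10 Mpix photo (3648 x 2736) printed at 300 DPI:
print(print_size_inches(3648, 2736, 300))   # (12.16, 9.12) inches

# The same photo printed 24 inches wide without resampling:
print(dpi_for_print_size(3648, 24))         # 152 DPI, pixels unchanged
```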

What to take from this?

First of all, DPI has absolutely no effect on the web or in image files. All web pages and computer graphics are displayed based on their width and height in pixels. Even if you sometimes see a PPI (pixels per inch) attribute for a monitor, it is never used to calculate the display size of a photo. A photo shown at 100% will always occupy one screen pixel per photo pixel.

Secondly, if someone asks you for the DPI of a photo, you can answer whatever you want. On its own this attribute means nothing; it’s just a parameter you can set. A proper question would be what the pixel resolution is, or what the print size is at a certain DPI setting. These are relevant, as they influence the print quality and how big you can print the photo without dropping to a lower DPI.

Thirdly, if you get the question about DPI from someone who is supposed to print your photos, you just got confirmation that this person does not know what he or she is doing. A photo with a low DPI does not have to be a low quality photo (but it can be); it’s just set up for a big print size.

As I post photos, from time to time I get questions asking whether this or that thing in them is real. Better said, whether something looks the way it does because it was really there, or because of camera settings, or just because of Photoshop. To clarify my point of view on this, here is this post.

I think every photographer has a line for what edits he or she is willing to perform on a photo while still calling it a photo. Anything beyond that, I would already call a photo manipulation. This line is quite different for everyone, and it also depends on what the photos are used for.

I edit my photos quite a lot, as is quite obvious from this site. But I try to stay away from photo manipulations, or if I do some, I clearly mark in the description that a part of the photo is not real.

Close to the ground

So here is what I will edit in a photo:

  1. I will remove camera issues: distortion, aberrations, noise, flares and similar
  2. I will remove traces of myself from the photo, stuff like a stray shadow or the bag I forgot to move
  3. I will remove people, cars, scaffolding, cranes and trash. I find all of these very distracting and already try to avoid them when composing a photo, but as everyone knows, sometimes cropping them out would make the photo worse
  4. I will play with the brightness, color and sharpness of a photo. These are actually all the things you change in HDR post-processing
  5. Sometimes I also remove identifiable markers, like license plates on cars or copyrighted signs

And then there are things I don’t do, or if I do them I clearly state that they have been done:

  1. Fake reflections. If I remember correctly, in the last 5 years I have posted 4 photos with a fake reflection, and they all probably had “fake” in their name :)
  2. Fake light effects, flares, light stars and similar
  3. Pasting in a starry sky. If I can’t get a photo of the stars from a certain spot, I won’t create one.
  4. Pasting in the sun, the moon, a rainbow, people, objects or anything else
  5. Combining photos from different locations into one

Then there is also the question of the sky in landscape photos. I don’t mind when someone replaces the sky in their photo, I just don’t do it myself. I know how to do it, and I have done it a few times, but I never posted the result. The reason is that the photo just looks so fake to me afterwards that I can’t get over it. I just know that it’s not what I saw.

That’s actually also how I limit my edits. If a photo starts to feel fake to me, I know I have to undo or tone down the edits I made to it.

Btw. I’m not saying I don’t like photo manipulations. I like them, and there are many photographers who create wonders with them. I just like to know, when I’m looking at a photo, whether it’s a photo or a photo manipulation. Sometimes it’s really hard to tell :)

What are your thoughts? Where is your line?

If you have looked at my photos, you will have noticed that I use Photoengine quite a lot. Over the last few years it has become one of the main tools I use. Each time I post a process post, I mention that I used Photoengine to blend the photos and only tweaked a few settings. To make this a little more elaborate, today I will go into more detail and show you my exact process of how I work with Photoengine.

Before I get into Photoengine, I always correct a few things in Lightroom. Lens distortion and chromatic aberrations are the ones I correct most often, but from time to time I also correct the white balance. One can of course do this in Photoengine as well, but doing so makes it harder to blend parts of the original exposures back into the photo later.

Oloneo Photoengine

That’s why I always use 16-bit TIFF files as Photoengine input, as I want all these tweaks to already be included. Then I select which files to use. If a series is exposed properly around the middle, the Photoengine result will be nicely exposed from the start. But if you include very bright or very dark exposures, the base will be either too bright or too dark, and you will have to spend more time tweaking the settings to get a good result. It’s much easier to just leave those exposures out and correct any over- or underexposed areas later.

If there was wind while I took the exposures, I turn on Auto Align. I don’t use the ghost removal, as it can result in strange artifacts around the photo, and it just makes things harder to correct later, because its effect is not even enough.

Once in the HDR tone mapping, I start by changing the strength, usually to something around 40-60, almost never more. You will see that when you move the slider, the shadows get brighter first, and after a certain point the bright areas start to become darker. I stay below this point and just leave the bright areas as they are. Making them darker only makes the photo more unrealistic and creates ugly borders around the dark areas.

Oloneo Photoengine

You will notice that this also removes all of the contrast from the photo. So the second thing I change is the first contrast slider. It’s hard to say exactly by how much, but if I see that I’m going very high and still don’t have enough contrast, I add a little Low dynamic tone contrast. That one usually adds contrast more quickly.

Sometimes this will make the photo look a little dark, which can be easily corrected with the fine exposure. The last step is to check what effect the Natural HDR mode has, and whether I like the colors more with it on or off. Sometimes I even save both versions and then blend them together in Photoshop, when I like some areas from the version with it on and some from the version with it off.

And that’s all. From here I save the file as a 16-bit TIFF and continue in Photoshop. Feel free to ask any questions about this in the comments.

One of the ways you can create panoramic photos is with Autopano from Kolor. Today I will show you how to use it if you want to create HDR panoramas.

There are two approaches to creating an HDR panorama. Either you first create an HDR from each series of photos and then combine the results into a panorama, or you first create a panorama from each set of exposures of the same brightness and then use those to create the HDR. The second way is much preferred, as it avoids unwanted color mismatches. The only exception is 360-degree panoramas, as some tone-mapping software does not support repeating edges, and there you have to use the opposite approach.

There is also a third option, where you create the HDR result directly in Autopano. It makes things easier, but I personally prefer a result from a program specialized in tone mapping. I just find those results better. So in this guide I will show you how to do that. Let’s get started.

Btw. all the screenshots are from Autopano Giga 4 beta 5, and you can find out more about it on the official Kolor page.

1. Let’s start in Lightroom. It’s always better to prepare the exposures first: remove chromatic aberrations, lens distortion and vignetting, and then export the files as 16-bit TIFFs. If you have a slow computer and a big panorama, it’s better to go with JPEGs, as the process is very computer intensive.
2. Next we need to open the files in Autopano. First create a new group with the New group button in the bottom left. Now add all the exported files to it with the Add image button in the top left of the group window.

1. All source files
2. Open files in Autopano

3. This will load all the exposures, and Autopano will detect that this is an HDR. The problem is that they are not split into stacks, so if you just continued like this, you could only save a final HDR, not the layers for the separate exposures, and we need those. So don’t select any photos in the group; if you clicked on one, just click on the background to deselect it. Now right-click on the background of the group and choose Create stacks by N.
4. You will get a new popup, where you have to choose how many exposures you took for every photo (each series has to be the same) and which one is the main one. The main picture is the one used for blending the panorama, and it will be shown as the preview. So choose the number of exposures, and pick the main picture somewhere in the middle, around the 0 EV exposure. Here the number of exposures per shot was 5 and the main one was the 3rd. Confirm with OK.

3. Choose Create stacks
4. Create stacks

5. We are back in the main window now, and we need to tweak the detection settings. Click on the wrench icon in the top left, and in the new popup look at the Links part. The first setting determines where Autopano looks for control points. Since we set up a main layer for every stack, setting this to The reference level will use that layer to detect the control points (it’s actually also the default value). Secondly, if we turn on Use hard links, Autopano will presume that the shots were taken on a tripod, do the aligning only once, and then just copy the same settings onto all the other exposures. This results in all the panoramas being identical on export. Click OK to continue, and then on Detect in the top left.
6. The detection will take a while, especially with many photos, and once it’s done, you will get a new panorama on the right. Click the Edit button to edit the settings of that panorama.

5. Change detection settings
6. Edit panorama

7. The edit window will open. I will not go into too much detail here, as this tutorial focuses on the HDR aspect, but I will show you a few things we need to correct.
8. We need to choose the right projection settings. Click on the Projection settings button and choose the one that best fits the photo you are editing. For this one it’s spherical for me.

7. Editor
8. Select projection

9. Another thing I want to turn off is the color correction. This is because I want Autopano to make no edits to the images, as I want to use them further in different software. Click on Edit color anchor and choose None. When this is done (and you need no other corrections), close the edit window.
10. The next step is to render the layers. Click on the gear icon, and in the render dialog choose Anti-ghost as the blending preset, 16-bit TIFF as the file format and Layers as the exported data. You can also turn on Remove alpha channel, as we don’t need it. Hit Render to continue.

9. Remove corrections
10. Choose render settings

11. This will now take a while, but after the wait you will get a panorama series that you can use further in an HDR tone-mapping program. You can save your panorama settings in Autopano, but I usually don’t, as I don’t plan to redo it.
12. From here you just continue as with any other HDR. You can for instance create the HDR in Oloneo Photoengine.

11. Render
12. Merge HDR

And that’s all the steps you need to combine panoramas for HDR in Autopano. Feel free to ask any questions, and also check out my other article on combining photos for HDR panoramas.

Some time ago I wrote about how I manually focus my photos, and today I will show you a different way to get the best DOF, using the Hyperfocal distance.

What is the Hyperfocal distance?

In the simplest terms, the Hyperfocal distance is the distance at which you have to focus your lens to get the maximum amount of DOF. To say it differently, it’s the closest distance at which you can focus while still achieving good sharpness for objects at infinity.

The Hyperfocal distance is of course not the same for all situations, as it depends on the sensor size, the aperture used and the focal length. Each time one of these changes, the Hyperfocal distance changes too.

Additionally, there is one more parameter, the Circle of confusion. This one depends only on the sensor and can be looked up for every specific camera. The circle of confusion is the maximum size a point in the scene can make on your sensor while still appearing sharp. The more out of focus a point is, the bigger it appears on the sensor; you may know that effect as bokeh. In some Hyperfocal distance calculators you can change this value, in others a value based on the sensor size is used. It’s always best to check the exact value for your camera.

How to calculate the Hyperfocal distance

You can look up the equations and calculate it yourself, but the simplest way is to just print a table with the values (which you can easily find on the internet) or, even better, use a Hyperfocal calculator app. Here is an example of a table I made for my camera, the 5D Mark II, simply using the Hyperfocal Pro app for Android.

Hyperfocal distance table
You can see here the different distances for combinations of aperture and focal length.
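
If you do want to calculate it yourself, the standard formula is H = f² / (N·c) + f, with focal length f, f-number N and circle of confusion c. Here is a minimal Python sketch of that calculation; the 0.030 mm circle of confusion is just a commonly quoted full-frame figure, so check the exact value for your own camera.

```python
# Hyperfocal distance from the standard formula H = f^2 / (N * c) + f,
# where f is the focal length, N the aperture (f-number) and c the
# circle of confusion. c = 0.030 mm is a commonly used full-frame
# value; check the exact number for your own camera.

def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.030):
    """Hyperfocal distance in millimetres."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

def near_limit_mm(focal_length_mm, f_number, coc_mm=0.030):
    """Near limit of acceptable sharpness when focused at the
    hyperfocal distance (roughly half of it)."""
    return hyperfocal_mm(focal_length_mm, f_number, coc_mm) / 2

# 16 mm at f/11 on a full-frame camera:
print(round(hyperfocal_mm(16, 11) / 1000, 2))   # ~0.79 m
print(round(near_limit_mm(16, 11) / 1000, 2))   # ~0.40 m, sharp from here to infinity
```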

How to use the Hyperfocal distance

So what now, when you know the distance? You turn off auto-focus on your lens and focus at that distance. Sometimes you can only do this approximately, as the distance scale on a lens is not very detailed. Now you will get everything from half that distance to infinity in focus. I would suggest composing first and having the camera on a tripod, as without that it’s hard to maintain the distance. Also, each time you change the aperture or the focal length, you have to recalculate and refocus.

Hyperfocal distance

When to use it

It’s great when you need to have an exact idea of the DOF to get maximum sharpness, especially if you are trying to take a photo that includes a foreground element and should be sharp all the way to infinity. In all other cases, it should be sufficient to focus one third of the way into the scene to get a good result.
