I’ve updated some information for modified DSLRs in a new post here. The tips below are for use with an unmodified camera. Much of this still applies to a modified camera, but there are differences, so I’ve chosen to separate the two.
The map above shows where I live. O’Hare airport is a couple of miles to my southwest, I-294 runs a couple hundred yards west of my back yard, and about 300,000 high-pressure sodium street lights sit to my east and southeast. I can see Orion, and on an excellent night, five of the Seven Sisters if I’m wearing my specs. Even locating Polaris unaided can be challenging at times. If I’m being completely honest, I had never seen the Milky Way stretch across the sky until my wife and I went on a trip about 160 miles west of Chicago only six months ago – and I’m 42! How in the heck could I image things I can’t even begin to see with the naked eye? A filter, Astrotortilla, dithering, one option change when stacking, and patience.
Light Pollution Filter
I don’t have a lot of experience with anything, but I know what I use and how it works. I currently use an Optolong CLS 2″ screw-in filter, which screws into the end of my coma corrector. There is also a clip-in version available that fits right into the camera body. Image processing with this filter can be extremely difficult if you’re using it with an unmodified camera. Though it’s a debated topic, I’m of the opinion that the use of a custom white balance is essential. I’ve read that you don’t need a custom white balance with a modded camera when using a CLS filter and can just use the daylight white balance setting, but I really don’t know from experience since my camera is not modified. The IDAS LPS P2 and D1 filters are supposed to eliminate the need for balancing.
Some people prefer to process out light pollution and not use a filter, but this post isn’t titled “Imaging in Low-to-Moderate Light Pollution”. This is about severe light pollution, and how even people on the outskirts of Chicago can pull this off. In my situation, I can’t expose filterless for longer than 60 seconds without blowing out the image from skyglow at ISO800. With the filter, I can go 6-8 minutes (depending on conditions) on a dim object at ISO800 before slamming into the right edge of the histogram. My current methods don’t incorporate nearly that long of an exposure, but the potential is there.
Chicago is running a project over the next two years to replace street, alley, and park lights with LEDs that they say will be aimed properly and reduce light pollution. If they are indeed aimed properly, great: it’s a win-win. If not, we’ll be left out in the not-so-dark. LEDs emit across more of the visible spectrum, which makes their glow nearly impossible to filter out without also cutting the light from the things you actually want to see. I guess I’ll find out pretty soon, in my east and southeast sky, how effective the plan is.
Astrotortilla
I couldn’t survive without this. My GoTo is terrible, and for most objects I can’t even see the nearby stars to use as a reference. With a 15 second exposure, though, Astrotortilla can take the data, say “It’s over here, idiot”, and center the object I want to image. Before I used it, I wasted hours and hours hunting for star patterns near objects through an eyepiece. I would use this regardless of the mount I had or the sky I had to navigate, but I think it’s an absolute necessity in light pollution. Click the link below for some setup options for Astrotortilla if you’re having a hard time with it. It’s confusing at first, but fast and effective once you understand what it’s doing and how it’s doing it.
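Under the hood, Astrotortilla drives a local astrometry.net solver. As a rough illustration of what that solve step looks like (the mount-sync part is Astrotortilla’s own), here is a minimal sketch that builds a `solve-field` command with a position hint; the file name and coordinates are placeholders, not values from this post.

```python
# Sketch of the plate-solve step Astrotortilla automates: hand a short
# exposure to astrometry.net's solve-field with a rough position hint so
# it only searches near where the mount thinks it is pointing.
def solve_field_cmd(image_path, ra_hint=None, dec_hint=None, radius_deg=10):
    cmd = ["solve-field", "--overwrite", "--no-plots"]
    if ra_hint is not None and dec_hint is not None:
        # Constrain the search to a cone around the hint; this is what
        # turns a slow blind solve into a fast local one.
        cmd += ["--ra", str(ra_hint), "--dec", str(dec_hint),
                "--radius", str(radius_deg)]
    cmd.append(image_path)
    return cmd

# Placeholder 15 s capture and a hint near M81.
cmd = solve_field_cmd("frame_15s.fits", ra_hint=148.9, dec_hint=69.1, radius_deg=5)
print(" ".join(cmd))
# To actually solve: subprocess.run(cmd, check=True), with astrometry.net
# and its index files installed locally.
```

Once the solver returns the real image center, the remaining step is to sync the mount to those coordinates and re-slew, which is exactly the loop Astrotortilla runs for you.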
Dithering
Light pollution and skyglow put a whole lot of noise into your images. Without dithering, stacking only reinforces that noise, making it nearly impossible to pull out quality data. When you dither, the fixed-pattern noise is randomized from frame to frame, so stacking can separate it from the real signal you want and reject it altogether. Check out an example of what dithering does by clicking on the link below. I’ll never image without dithering again.
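To see why dithering helps the stacker reject fixed-pattern noise, here is a small synthetic demo (my own sketch, not from the original post): hot pixels sit at the same sensor positions every frame, and a sigma-clipped stack can only reject them when dithering moves them around between aligned frames.

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames, size = 20, 64

# Fixed-pattern noise: a handful of hot pixels at the same sensor
# positions in every exposure.
pattern = np.zeros((size, size))
hot = rng.integers(0, size, (30, 2))
pattern[hot[:, 0], hot[:, 1]] = 50.0

true_signal = 10.0  # a flat patch of "sky" for simplicity

def capture(dx, dy):
    # Photon/read noise is random per frame; the hot-pixel pattern is not.
    shot = rng.normal(true_signal, 2.0, (size, size))
    # After aligning frames on the stars, a dithered pattern lands in a
    # different place each frame; np.roll stands in for that shift.
    return shot + np.roll(pattern, (dy, dx), axis=(0, 1))

undithered = np.stack([capture(0, 0) for _ in range(n_frames)])
dithered = np.stack([capture(*rng.integers(-8, 9, 2)) for _ in range(n_frames)])

def sigma_clip_stack(stack, sigma=2.5):
    # Reject per-pixel outliers across the stack, then average the rest.
    med = np.median(stack, axis=0)
    std = stack.std(axis=0) + 1e-9
    clipped = np.where(np.abs(stack - med) < sigma * std, stack, np.nan)
    return np.nanmean(clipped, axis=0)

res_fixed = np.abs(sigma_clip_stack(undithered) - true_signal).max()
res_dither = np.abs(sigma_clip_stack(dithered) - true_signal).max()
print(f"worst residual, undithered: {res_fixed:.1f}")
print(f"worst residual, dithered:   {res_dither:.1f}")
```

In the undithered stack every frame agrees on the hot-pixel value, so the clipper has no reason to reject it; in the dithered stack each hot pixel is an outlier at its location and gets thrown away.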
Change in Deep Sky Stacker
By looking at amateur astrophotography images on the internet, you’d think that everything in space is red. A good majority of it is red, but did you know that with an unmodded camera the core of M42 is actually blue/green, not orange/red? Yes, hydrogen is by far the most abundant gas in space, but an estimated 7 or so percent of the atoms are helium, which emits orange to yellow at high intensities, and hydrogen itself shows different colors at different intensities. The point here is that space has a lot of different colors, and capturing all of them is a delicate dance between exposure time and integration time.

Setting Deep Sky Stacker to balance the background across channels has always caused my images to blow out the red channel due to intense skyglow. Instead, I always set it to balance per channel, and then balance the colors myself on the stacked data. Some of this gets into personal preference, but I find it near impossible to get a real color range when balancing the background in DSS: the red from the skyglow is nearly impossible to separate from the data I want. These are subtle changes, but they make a big difference.
This color balance is also something holding me back from modifying my camera. I have no issues or reservations about popping open the camera to remove the filter and add clear glass or an IR cut; I just don’t want to kill the other colors I can see now. The trade-off is that a modded camera offers more potential detail in less time, while an unmodded camera offers more color variation but needs more time. As with all of this, it’s really a personal preference as to how you want your finished product to appear.
I have since moved on to modify my camera with a Baader BCF-1 filter to maintain the UV/IR cut internally, and have also started imaging with an H-alpha filter. I still recommend starting simple to learn techniques for imaging and processing. More information about imaging with a modified camera to combat light pollution can be found here.
Patience
Everything is easier at a dark site: integration seemingly takes no time, guiding is a dream, and the data are easier to process. Imaging through pollution requires patience. You can get a result quickly in bright skies, but it won’t be a quality result. The key I’ve found for my situation is to take many more, shorter exposures and dither between every capture. I tried to follow the rules that work for a lot of other people in this hobby, and what I’d end up with was 4-6 hours of blurry data that took 8+ hours to acquire. With shorter exposures and dithering, I’m getting more quality, usable data in one quarter of that time.
Here’s an example of M81 (Bode’s Galaxy) to illustrate what I mean. Both images were taken in similar sky conditions with identical equipment. The first image is 4 hr 50 min of integration with all calibration files, taken using the methods that work at locations with less light pollution. The second is only 44 minutes of 120″ exposures, dithered between each image, with no darks. I’ve tried every light pollution removal method I know and I can’t get rid of the red in the five-hour image without destroying the good data. Noise-wise they are similar, but the second image has less than 1/6 the integration time and no time spent on darks (not to say that you don’t need darks). The outer bands in the 44-minute image are also quite a bit more defined, without the center being totally blown out. By no means is the 44-minute image complete, but the potential is exciting.
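The short-sub approach is also easy to budget. Here is a quick bit of arithmetic for a session like that 44-minute M81 run; the per-dither guider settle time is my own assumption, not a figure from this post.

```python
# Budget a short-sub session: 120 s subs, dithering between every capture.
sub_s = 120     # exposure length per sub, seconds
n_subs = 22     # 22 x 120 s = 44 min of integration
settle_s = 15   # ASSUMED guider settle time after each dither

integration_min = n_subs * sub_s / 60
wall_clock_min = n_subs * (sub_s + settle_s) / 60
print(f"integration: {integration_min:.0f} min, "
      f"time on sky: {wall_clock_min:.1f} min")
```

The dither settle adds only a modest overhead per sub, which is part of why many short, dithered exposures can beat a much longer session of long subs on total useful data per hour.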
As with anything else, I’m not imaging with your equipment under your skies. I try to generalize as much as possible so these tips apply to more people, but I can’t guarantee that you’ll take better images than you were by following what I do. If you’re close to a megalopolis and you’re frustrated with images that look red and blurry and are impossible to process, try these methods to clean up your images.