When you sacrifice sleep you can accomplish great things (for a brief period of time). Finally, after about a year of work, I have come up with a workflow to automatically enhance my stellar photography.
The image above was produced from my collection of 518 source images taken last year. I’m not sure how much better I will be able to make it (ignoring the fact that I haven’t yet corrected for the coma distortion in the corners). The reconstruction was a fun process which I will write about later, but the following animation demonstrates how the various stages look.
This reconstruction is mainly a combination of two algorithms: the first aligns and sums the individual frames, averaging out the noise and increasing the SNR (signal-to-noise ratio); the second takes an inferred PSF (point-spread function) and performs an iterative deconvolution to estimate the original, unblurred scene.
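The first stage can be sketched in a few lines of NumPy. To be clear, this is a simplified illustration and not my actual pipeline: it estimates whole-pixel shifts by phase correlation and averages the shifted frames, whereas real stacking also has to deal with sub-pixel shifts, field rotation, and outlier frames. The function names here are my own.

```python
import numpy as np

def align_offset(ref, frame):
    """Estimate the integer (dy, dx) shift that maps `frame` onto `ref`
    using phase correlation (normalized FFT cross-correlation)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Peaks past the halfway point correspond to negative shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def stack(frames):
    """Align every frame to the first one, then average the pile.
    Averaging N frames raises the SNR by roughly sqrt(N): the signal
    adds coherently while the random noise adds in quadrature."""
    ref = frames[0].astype(float)
    acc = np.zeros_like(ref)
    for frame in frames:
        dy, dx = align_offset(ref, frame)
        acc += np.roll(frame, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

With 518 frames, this averaging alone buys roughly a 20x SNR improvement over a single exposure, which is what makes the faint structure visible before deconvolution even starts.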
The biggest surprise during this project was discovering that the pictures I took were out of focus. In the first frame of the animation, you can see that each star looks like a small donut – a ring around a dark center. This happens when the focal point, where all of a star’s light should converge, falls slightly in front of or behind the camera sensor, so the light spreads out into a ring instead of a point. Thus, every spot in one of those tiny rings represents the same physical point in space.
Luckily, by using a bit of math and lots of processing power I was able to remove that out-of-focus blur and recover the sharper stars, now appearing as points. The result resolves detail finer than any single frame the camera actually recorded. It’s only by applying algorithms like these that we can cheat reality to admire the treasures hidden inside.
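A classic choice for this kind of iterative deconvolution is the Richardson–Lucy algorithm, which repeatedly nudges an estimate of the latent image until re-blurring it with the PSF reproduces the observation. The sketch below is a minimal NumPy version under strong assumptions (a known, spatially uniform PSF and circular convolution); I am not claiming it is the exact algorithm or PSF model used here, and the function names are my own.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30):
    """Richardson-Lucy deconvolution with FFT-based circular convolution.
    `psf` must sum to 1 and be centered at the origin (i.e. ifftshifted).
    Each step multiplies the estimate by a correction term derived from
    the ratio of the observation to the re-blurred estimate."""
    H = np.fft.fft2(psf)
    # Start from a flat image with the same mean as the observation.
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        # Forward model: blur the current estimate with the PSF.
        blurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * H)) + 1e-12
        ratio = observed / blurred
        # Adjoint step: correlate the ratio with the PSF (conj(H) is the
        # Fourier transform of the mirrored kernel for a real PSF).
        correction = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(H)))
        estimate *= correction
    return estimate
```

Feeding this a donut-shaped (annular) PSF is exactly the defocus case above: every ring in the observation collapses back toward the single point of light that produced it.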
Next step? Apply this process to some of my other, more recent astrophotography, then try shooting through a telescope to capture galactic arms.