I noticed that you're working on a Python-based AstroPhotography project and I would like to contribute if possible. Specifically, I would like to work on developing a set of Python functions that can take raw video data and use super-resolution techniques to convert the video data into a high-resolution image of an object. For an example of how something like this might work, please see the attached PDF (Frame-Recurrent Video Super-Resolution.pdf).
My thoughts on how this would work from a user's perspective are as follows (in this case, assume a bright source such as a planet):
Take a series of one- to two-minute videos using RGB filters -> Load the files into a Jupyter Notebook -> Apply super-resolution methods to each video to produce a high-quality set of RGB images -> Export the image data either to other functions within this project or to a different program.
To work on this I would need access to raw RGB video data of a well-known object (Saturn, Jupiter, etc.). Other than that, I have a fairly powerful computer on which I can test these methods.
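As a rough illustration of how the pipeline above might look in code, here is a minimal shift-and-add sketch: upsample each frame, register it against a reference by FFT phase correlation, and average. This assumes grayscale frames already loaded as NumPy arrays; the function names are hypothetical, and a real implementation would need sub-pixel registration and proper interpolation rather than pixel replication.

```python
import numpy as np

def register_shift(ref, frame):
    """Estimate the integer (dy, dx) translation of `frame` relative to
    `ref` using FFT phase correlation."""
    f0 = np.fft.fft2(ref)
    f1 = np.fft.fft2(frame)
    cross = np.conj(f0) * f1
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def shift_and_add(frames, scale=2):
    """Upsample every frame, align each to the first, and average."""
    up = [np.kron(f, np.ones((scale, scale))) for f in frames]
    ref = up[0]
    acc = np.zeros_like(ref, dtype=float)
    for f in up:
        dy, dx = register_shift(ref, f)
        acc += np.roll(f, (-dy, -dx), axis=(0, 1))  # undo the drift
    return acc / len(up)
```

This is only the simplest possible stand-in for the frame-recurrent approach in the attached PDF, but it exercises the same load -> align -> combine -> export flow.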
I hadn't considered dealing with video, although I'm not opposed in principle. That said, you might be better served by an astrophotography project that is already designed around video and sequences of images, for example lxnstack or one of its forks, although there may be other projects too.
lxnstack comes to mind because its aim is/was to identify the best frames in a sequence and align them to get rid of the normal shaking/jitter of amateur astronomical video. The only downside is that it was written in Python 2 and isn't actively being developed, so the original version may not be easy to get working, and the various forks converting it to Python 3 hadn't got far the last time I checked.
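The best-frame selection step that lxnstack performs could be sketched as follows. The variance of a discrete Laplacian is one common sharpness proxy for this kind of "lucky imaging" selection — I don't know whether it is what lxnstack actually uses, so treat the metric and names here as assumptions.

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian; higher means more fine detail,
    i.e. a less blurred frame."""
    lap = (-4 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return lap.var()

def best_frames(frames, keep=0.25):
    """Return the sharpest `keep` fraction of frames, best first."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    n = max(1, int(len(ranked) * keep))
    return ranked[:n]
```

The selected frames would then be fed to the alignment/stacking stage.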
As for example video, I don't have any examples I've taken myself, but you can find various examples on the web that are typical of what can be done with normal amateur instruments, e.g. this one.
Thanks for your response! Sorry for taking so long to reply to your comment, but I have been busy searching for and testing various methods for improving images taken with achromatic telescopes. Currently I have two ideas in mind. The first involves N-dimensional interpolation to generate extra 'frames' for image-stacking programs. The second is to use two different images obtained with different apertures to digitally 'remove' axial chromatic aberration. I'll have to flesh out the code and collect some test images before I know whether or not these methods work. If you're interested, I can come back to this thread later in the fall when I have made more progress.
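For the first idea, the simplest possible version is plain linear blending in time between neighbouring frames — an assumption on my part, since the actual N-dimensional interpolation would presumably be more sophisticated, but it shows the shape of the interface:

```python
import numpy as np

def interpolate_frames(frames, factor=2):
    """Synthesize intermediate frames by linear interpolation in time.
    Each pair of neighbouring frames is blended at `factor` evenly
    spaced weights, multiplying the frame count available for stacking."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for i in range(factor):
            t = i / factor
            out.append((1 - t) * a + t * b)  # crude temporal blend
    out.append(frames[-1])
    return out
```

For `factor=2` this turns N frames into 2N-1, with one synthetic frame inserted between each original pair.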