1. Not sure what you mean exactly. The program does detect https URLs. If you are referring to the failed attempts during bug scanning: most of them are commits without a URL link to a bug. The rest are either extracted links that start with something like <<https://blablabla.com>> or have other leading or trailing characters, or commits with multiple bug links, because of the sloppy fashion in which I extracted the URLs. It would be an easy fix, really. The code is sloppy because I was always in a huge hurry to get as much done in as little time as I could.
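For what it's worth, the fix would probably look something like this: find the https candidates first, then trim the stray brackets and punctuation off each one. This is just a sketch, not the actual code from the project, and extract_bug_urls is a made-up name:

```python
import re

def extract_bug_urls(message):
    """Pull https URLs out of a commit message, stripping leading/trailing
    junk like '<<' and '>>' left over from sloppy extraction.
    Returns a list, so commits with multiple bug links are handled too."""
    # Grab everything that starts with https:// up to the next whitespace,
    # then trim surrounding brackets/punctuation from each candidate.
    candidates = re.findall(r'https://\S+', message)
    return [url.strip('<>,.;:()[]"\'') for url in candidates]
```

So extract_bug_urls("fix crash, see <<https://blablabla.com/bug/1>>") would give back the clean URL instead of choking on the angle brackets.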
2. Easy to add if I have time.
3. I wrote the parser before I learned about the threading module in Python. It would be really easy to keep the GUI responsive using threading. I just didn't think many people were really going to create their own database, so I haven't rewritten it. I could create a pool of worker threads and run 5-10 scans at once, reducing the scan time dramatically. Again, not much time, man.
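If I ever get around to it, the worker-pool part would be roughly this (a sketch with a placeholder scan function, not the real parser code):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def scan_commit(commit):
    """Placeholder for the real per-commit bug-scan logic."""
    return "scanned " + commit

def scan_all(commits, workers=8):
    """Run several scans concurrently (5-10 workers or so).
    Called from a background thread, this would leave the GUI thread free."""
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(scan_commit, c) for c in commits]
        for fut in as_completed(futures):
            results.append(fut.result())
    return results
```

The GUI would kick scan_all off in its own thread and poll for results, so the window never freezes during a long scan.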
4. Nah!
5. The hosting does work with files that depend on other files. It looks for the file that you paste into the entry field; if it finds the file, it starts the server in THAT directory. It makes a copy of the file and renames the copy to index.html. If index.html already exists, it is simply overwritten each time.