Class was boring, so today I used Python to write a vulnerability scanner for fun. The principle: first probe for known vulnerabilities, then scan for backup files; results are automatically saved to a file in the current directory.

The work falls into two parts: information acquisition and simulated attack.

When probing a target system, a network vulnerability scanner first discovers the target's live hosts, scans their ports to determine which are open, and identifies each host's operating system via protocol fingerprinting. The scanner then identifies the network service behind each open port to determine what the host provides. Based on the target's operating system platform and the services it offers, the scanner works through the known vulnerabilities in its vulnerability database one by one, deciding whether each is present by analyzing the response packets to its probes.
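To make the first steps concrete, here is a minimal sketch of host discovery plus port scanning with raw sockets. It is illustrative only and not part of the tool below; the host and port list are placeholder assumptions.

# -*- coding:utf-8 -*-
import socket

def probe(host, ports):
    # try a TCP handshake on each port; grab a banner where the service sends one
    open_ports = {}
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(2)
        try:
            s.connect((host, port))    # a completed handshake means host alive, port open
            try:
                open_ports[port] = s.recv(128).strip()  # many services self-identify
            except socket.timeout:
                open_ports[port] = ''  # open, but silent until spoken to
        except (socket.timeout, socket.error):
            pass                       # closed or filtered
        finally:
            s.close()
    return open_ports

print probe('127.0.0.1', [21, 22, 80, 443, 8080])  # placeholder host and ports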

Therefore, as long as we carefully study each vulnerability and learn its detection and response signature codes, we can simulate the detection of any known vulnerability in software.

The vulnerability simulation system simply analyzes whether a probe packet sent by the scanner contains the detection signature and, if so, returns a packet carrying the corresponding response signature. For each vulnerability, the detection signature and the response signature are therefore the two essential descriptions.
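As a sketch of that idea (separate from the tool below, with hypothetical names): each check pairs a detection signature, the probe we send, with a response signature, the marker we expect back if the hole exists.

import requests

def check_signature(base_url, probe_path, response_marker):
    # probe_path is the detection signature, response_marker the response signature
    try:
        r = requests.get(base_url + probe_path, timeout=3)
        return response_marker in r.text   # marker present -> likely vulnerable
    except requests.RequestException:
        return False

# hypothetical example: exposed SVN entries files contain the string 'dir'
print check_signature('http://example.com', '/.svn/entries', 'dir')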

With database technology, newly discovered vulnerabilities can easily be added to the vulnerability database, the simulation software can keep that database current, and a scanner's ability to detect security vulnerabilities can be tested more effectively. (Here, for practical reasons, I use a plain text file rather than a database to store the signature codes.)
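A minimal sketch of that text-file stand-in, assuming one signature path per line (the format config.txt below uses): adding a newly discovered check is just appending a line.

def load_signatures(path='config.txt'):
    # skip blank lines; every remaining line is one path to try against each target
    return [line.strip() for line in open(path) if line.strip()]

print '%d signatures loaded' % len(load_signatures())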

config.txt is the configuration file (the backup-path dictionary).

url.txt holds the URLs to be scanned.

The built-in checks cover common editor vulnerabilities (eWebEditor, FCKeditor) and SVN source-leak vulnerabilities.

It runs multithreaded.

The idea of the program: load each URL from url.txt, confirm it responds, probe it for domain-named backup archives, the SVN entries file, and known editor paths, then run every path in the dictionary against it, writing any hits to result.txt.

Here is config.txt:

/a.zip
/web.zip
/web.rar
/1.rar
/bbs.rar
/www.root.rar
/123.rar
/data.rar
/bak.rar
/oa.rar
/admin.rar
/www.rar
/2014.rar
/2015.rar
/2016.rar
/2014.zip
/2015.zip
/2016.zip
/1.zip
/1.gz
/1.tar.gz
/2.zip
/2.rar
/123.rar
/123.zip
/a.rar
/a.zip
/admin.rar
/back.rar
/backup.rar
/bak.rar
/bbs.rar
/bbs.zip
/beifen.rar
/beifen.zip
/beian.rar
/data.rar
/data.zip
/db.rar
/db.zip
/flashfxp.rar
/flashfxp.zip
/fdsa.rar
/ftp.rar
/gg.rar
/hdocs.rar
/hdocs.zip
/HYTop.mdb
/root.rar
/Release.rar
/Release.zip
/sql.rar
/test.rar
/template.rar
/template.zip
/upfile.rar
/vip.rar
/wangzhan.rar
/website.rar
/www.rar
/www.zip
/wwwroot.rar
/wwwroot.zip

Vulnerability scanning tool.py

# -*- coding:utf-8 -*-
import requests
import time
import Queue
import threading
import urllib2
import socket
timeout=3
socket.setdefaulttimeout(timeout)
q = Queue.Queue()
time.sleep(5)
headers={'User-Agent':'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
f1 = open('url.txt','r')
f2 = f1.readlines()
f1.close()
f3 = open('result.txt','a+')
f9 = open('config.txt','r')  # load the config file (the backup dictionary)
f4 = f9.readlines()
f9.close()

def rarzip():       # scan for domain-named .rar/.zip backups
    for url in (yumingrar, yumingzip):
        try:
            print url
            req = urllib2.Request(url=url,headers=headers)
            resp = urllib2.urlopen(req)
            if resp.code == 200:
                meta = resp.info()
                size = str(meta.getheaders("Content-Length")[0])  # file size in bytes
                if int(size) > 8888:   # anything smaller is likely a WAF/soft-404 page
                    print '★★★★★Found A Success Url Maybe backups★★★★★'
                    print url
                    print 'Size:' + size + ' bytes'
                    f3.write(url + '----------' + size + ' bytes' + '\n')
                else:
                    print '888 Safe Dog I Fuck You 888'
            else:
                print '[+]Pass.........................'
        except:
            pass

def svn():
    try:    # SVN source-leak scan
        print yumingsvn
        ryumingsvn = requests.get(url=yumingsvn,headers=headers,allow_redirects=False,timeout=3)
        if ryumingsvn.status_code == 200:
            print '★★★★★Found A Success Url Maybe Vulnerability★★★★★'
            f3.write(yumingsvn + '      [SVN source leak vulnerability]' + '\n')
        else:
            print '[+]Pass.........................'
    except:
        print "[+]Can not connect url"

def eweb():     # eWebEditor vulnerability scan
    print '---------------Ewebeditor Vulnerability Scan---------------'
    for url in (eweb1, eweb2, eweb3, eweb4, eweb5, eweb6):
        try:
            print url
            r = requests.get(url=url,headers=headers,allow_redirects=False,timeout=3)
            if r.status_code == 200:
                print '★★★★★Found A Success Url Maybe Vulnerability★★★★★'
                f3.write(url + '      [Ewebeditor editor vulnerability]' + '\n')
            else:
                print '[+]Pass.........................'
        except:
            print "[+]Can not connect url"

# FCKeditor vulnerability scan
def fck():
    print '---------------Fckeditor Vulnerability Scan---------------'
    for url in (fck1, fck2, fck3, fck4, fck5, fck6):
        try:
            print url
            r = requests.get(url=url,headers=headers,allow_redirects=False,timeout=3)
            if r.status_code == 200:
                print '★★★★★Found A Success Url Maybe Vulnerability★★★★★'
                f3.write(url + '      [Fckeditor editor vulnerability]' + '\n')
            else:
                print '[+]Pass.........................'
        except:
            print "[+]Can not connect url"
for i in f2:
    c = i.strip('\n')
    print c
    try:
        ceshi = requests.get(url=c,headers=headers,allow_redirects=False,timeout=3)
        if ceshi.status_code == 200:
            a = c.split(".",2)[1]  # main domain label, e.g. 'example' in http://www.example.com
            yumingrar = c + '/' + a + '.rar'  # candidate domain-named .rar/.zip backups
            yumingzip = c + '/' + a + '.zip'
            rarzip()
            # build URLs for a series of known vulnerable paths
            yumingsvn = c + '/.svn/entries'  # SVN leak
            svn()
            eweb1 = c + '/editor/editor/filemanager/browser/default/connectors/test.html'   # eWebEditor paths
            eweb2 = c + '/editor/editor/filemanager/connectors/test.html'
            eweb3 = c + '/editor/editor/filemanager/connectors/uploadtest.html'
            eweb4 = c + '/html/db/ewebeditor.mdb'
            eweb5 = c + '/db/ewebeditor.mdb'
            eweb6 = c + '/db/ewebeditor.asp'
            eweb()
            fck1 = c + '/fckeditor/editor/filemanager/browser/default/connectors/test.html'  # FCKeditor paths
            fck2 = c + '/fckeditor/editor/filemanager/connectors/test.html'
            fck3 = c + '/FCKeditor/editor/filemanager/connectors/uploadtest.html'
            fck4 = c + '/FCKeditor/editor/filemanager/upload/test.html'
            fck5 = c + '/fckeditor/editor/filemanager/browser/default/browser.html'
            fck6 = c + '/FCKeditor/editor/fckeditor.html'
            fck()
        else:
            pass
    except:
        print "NO USE URL WHAT FUCK A BIG URL"
        pass

for i in f2:
    c = i.strip('\n')
    try:
        ce = requests.get(url=c,headers=headers,allow_redirects=False,timeout=3)
        if ce.status_code == 200:
            q.put(c)
        else:
            pass
    except:
        print "NO USE URL WHAT FUCK A BIG URL"
        pass
def starta():
    print '---------------Start Backups Scan---------------'    # start loading paths from the dictionary
    while not q.empty():
        zhaohan = q.get()  # pull a live URL off the queue
        for f5 in f4:
            f6 = f5.strip('\n')  # one backup path from the dictionary
            urlx = zhaohan + f6  # live URL + backup path
            print urlx
            try:
                req = urllib2.Request(url=urlx,headers=headers)
                response = urllib2.urlopen(req)
                if response.code == 200:
                    sizes = str(response.info().getheaders("Content-Length")[0])  # size in bytes
                    if int(sizes) < 8888:   # too small: probably a WAF/soft-404 page
                        print '888  Safe Dog I Fuck You  888'
                    else:
                        print '★★★★★Found A Success Url Maybe backups★★★★★'
                        print 'Size:' + sizes + ' bytes'
                        f3.write(urlx + '----------' + sizes + ' bytes' + '\n')
                else:
                    print '[+]Pass.........................'
            except:
                pass
thread1 = threading.Thread(target=starta)   # pass the function itself, don't call it here
thread1.start()
thread1.join()   # wait for the scan to finish before closing the result file
f3.close()
print '--------------------------------------------------------------------'
print '--------------------------------OVER--------------------------------'
print '--------------------------------------------------------------------'
time.sleep(10)
exit() 

The effect is quite noticeable:

Then I looked at what the experts had written and found a more concise approach:

It farms SRC (security response center) targets automatically: collect all of the target's domain names and scan them in bulk.

 

It scans paths according to rules covering Tomcat, JBoss, WebLogic, SVN, Jenkins, backup files, and so on, which is different from a generic admin-directory scan. If you later want to target a specific type of scan, rules can be added freely.
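The rules compose with a target in a simple way: an entry starting with ':' carries its own port (like ':8080/manager'), everything else is a plain path, and both are just appended to scheme + domain. A small sketch, with a placeholder domain:

def build_urls(domain, rules, schemes=('http://', 'https://')):
    # ':8080/manager' and '/manager' both concatenate directly after the domain
    return [s + domain + r for s in schemes for r in rules]

for u in build_urls('example.com', [':8080/manager', '/.svn/entries']):  # placeholder target
    print u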

 

If you need a rule added, please post it in the comments below; if the code needs modification, please message me privately.

 

dirFinder.py is the scan script.

rule.txt holds the rules and can be extended freely. Open it with EditPlus, Notepad++, etc.; plain Notepad will not display the (Unix-style) line breaks.

url.txt holds the URL addresses to be scanned.

The scan results are automatically written to vulurl.txt.

rule.txt is as follows:

/wls-wsat/CoordinatorPortType
/wls-wsat/CoordinatorPortType11
:9443/wls-wsat/CoordinatorPortType
:9443/wls-wsat/CoordinatorPortType11
:8470/wls-wsat/CoordinatorPortType
:8470/wls-wsat/CoordinatorPortType11
:8447/wls-wsat/CoordinatorPortType
:8447/wls-wsat/CoordinatorPortType11
:8080
:8007
/asynchPeople
/manage
/script
:8080/jenkins
:8007/jenkins
/jenkins
/.svn/entries
/.svn
/console/
/manager
:8080/manager
:8080/manager/html
/manager/html
/invoker/JMXInvokerServlet
/invoker
:8080/jmx-console/
/jmx-console/
/robots.txt
/system
/wls-wsat/CoordinatorPortType
/wsat/CoordinatorPortType
/wls-wsat/CoordinatorPortType11
/wsat/CoordinatorPortType11
/examples/
/examples/servlets/servlet/SessionExample
/solr/
/.git/config
/.git/index
/.git/HEAD
/WEB-INF/
/core
/old.zip
/old.rar
/old.tar.gz
/old.tar.bz2
/old.tgz
/old.7z
/temp.zip
/temp.rar
/temp.tar.gz
/temp.tgz
/temp.tar.bz2
/package.zip
/package.rar
/package.tar.gz
/package.tgz
/package.tar.bz2
/tmp.zip
/tmp.rar
/tmp.tar.gz
/tmp.tgz
/tmp.tar.bz2
/test.zip
/test.rar
/test.tar.gz
/test.tgz
/test.tar.bz2
/backup.zip
/backup.rar
/backup.tar.gz
/backup.tgz
/back.tar.bz2
/db.zip
/db.rar
/db.tar.gz
/db.tgz
/db.tar.bz2
/db.log
/db.inc
/db.sqlite
/db.sql.gz
/dump.sql.gz
/database.sql.gz
/backup.sql.gz
/data.zip
/data.rar
/data.tar.gz
/data.tgz
/data.tar.bz2
/database.zip
/database.rar
/database.tar.gz
/database.tgz
/database.tar.bz2
/ftp.zip
/ftp.rar
/ftp.tar.gz
/ftp.tgz
/ftp.tar.bz2
/log.txt
/log.tar.gz
/log.rar
/log.zip
/log.tgz
/log.tar.bz2
/log.7z
/logs.txt
/logs.tar.gz
/logs.rar
/logs.zip
/logs.tgz
/logs.tar.bz2
/logs.7z
/web.zip
/web.rar
/web.tar.gz
/web.tgz
/web.tar.bz2
/www.log
/www.zip
/www.rar
/www.tar.gz
/www.tgz
/www.tar.bz2
/wwwroot.zip
/wwwroot.rar
/wwwroot.tar.gz
/wwwroot.tgz
/wwwroot.tar.bz2
/output.zip
/output.rar
/output.tar.gz
/output.tgz
/output.tar.bz2
/admin.zip
/admin.rar
/admin.tar.gz
/admin.tgz
/admin.tar.bz2
/upload.zip
/upload.rar
/upload.tar.gz
/upload.tgz
/upload.tar.bz2
/website.zip
/website.rar
/website.tar.gz
/website.tgz
/website.tar.bz2
/sql.log
/sql.zip
/sql.rar
/sql.tar.gz
/sql.tgz
/sql.tar.bz2
/sql.7z
/sql.inc
/data.sql
/qq.sql
/tencent.sql
/database.sql
/db.sql
/test.sql
/admin.sql
/backup.sql
/user.sql
/sql.sql
/index.zip
/index.7z
/index.bak
/index.rar
/index.tar.tz
/index.tar.bz2
/index.tar.gz
/dump.sql
/1.tar.gz
/a.tar.gz
/x.tar.gz
/o.tar.gz
/conf/conf.zip
/conf.tar.gz
/qq.pac
/tencent.pac
/server.cfg
/deploy.tar.gz
/build.tar.gz
/install.tar.gz
/secu-tcs-agent-mon-safe.sh
/password.tar.gz
/site.tar.gz
/tenpay.tar.gz
/rsync_log.sh
/rsync.sh
/webroot.zip
/tools.tar.gz
/users.tar.gz
/webserver.tar.gz
/htdocs.tar.gz
/admin/
/admin.php
/admin.do
/login.php
/login.do
/admin.html
/manage/
/server-status
/login/
/fckeditor/_samples/default.html
/ckeditor/samples/
/fck/editor/dialog/fck_spellerpages/spellerpages/server-scripts/spellchecker.php
/fckeditor/editor/dialog/fck_spellerpages/spellerpages/server-scripts/spellchecker.php
/app/config/database.yml
/database.yml
/sqlnet.log
/database.log
/db.log
/db.conf
/db.ini
/logs.ini
/upload.do
/upload.jsp
/upload.php
/upfile.php
/upload.html
/upload.cgi
/jmx-console/HtmlAdaptor
/cacti/
/zabbix/
/jira/
/jenkins/static/f3a41d2f/css/style.css
/static/f3a41d2f/css/style.css
/exit
/memadmin/index.php
/phpmyadmin/index.php
/pma/index.php
/ganglia/
/_phpmyadmin/index.php
/pmadmin/index.php
/config/config_ucenter.php.bak
/config/.config_ucenter.php.swp
/config/.config_global.php.swp
/config/config_global.php.1
/uc_server/data/config.inc.php.bak
/config/config_global.php.bak
/include/config.inc.php.tmp
/access.log
/error.log
/log/access.log
/log/error.log
/log/log.log
/logs/error.log
/logs/access.log
/errors.log
/debug.log
/log
/logs
/debug.txt
/debug.out
/.bash_history
/.rediscli_history
/.bashrc
/.bash_profile
/.bash_logout
/.vimrc
/.DS_Store
/.history
/.htaccess
/htaccess.bak
/.htpasswd
/.htpasswd.bak
/htpasswd.bak
/nohup.out
/.idea/workspace.xml
/.mysql_history
/httpd.conf
/web.config
/shell.php
/1.php
/spy.php
/phpspy.php
/webshell.php
/angle.php
/resin-doc/resource/tutorial/jndi-appconfig/test?inputFile=/etc/profile
/resin-doc/viewfile/?contextpath=/&servletpath=&file=index.jsp
/application/configs/application.ini
/wp-login.php
/wp-config.inc
/wp-config.bak
/wp-config.php~
/.wp-config.php.swp
/wp-config.php.bak
/.ssh/known_hosts
/.ssh/id_rsa
/id_rsa
/.ssh/id_rsa.pub
/.ssh/id_dsa
/id_dsa
/.ssh/id_dsa.pub
/.ssh/authorized_keys
/owa/
/ews/
/readme
/README
/readme.md
/readme.html
/changelog.txt
/data.txt
/CHANGELOG.txt
/CHANGELOG.TXT
/install.txt
/install.log
/install.sh
/deploy.sh
/INSTALL.TXT
/config.php
/config/config.php
/config.inc
/config.inc.php
/config.inc.php.1
/config.php.bak
/db.php.bak
/conf/config.ini
/config.ini
/config/config.ini
/configuration.ini
/configs/application.ini
/settings.ini
/application.ini
/conf.ini
/app.ini
/config.json
/output
/a.out
/test
/tmp
/temp
/user.txt
/users.txt
/key
/keys
/key.txt
/keys.txt
/pass.txt
/passwd.txt
/password.txt
/pwd.txt
/php.ini
/sftp-config.json
/index.php.bak
/.index.php.swp
/index.cgi.bak
/config.inc.php.bak
/.config.inc.php.swp
/config/.config.php.swp
/.config.php.swp
/app.cfg
/setup.sh
/../../../../../../../../../../../../../etc/passwd
/../../../../../../../../../../../../../etc/hosts
/../../../../../../../../../../../../../etc/sysconfig/network-scripts/ifcfg-eth1
/%c0%ae%c0%ae/%c0%ae%c0%ae/%c0%ae%c0%ae/%c0%ae%c0%ae/%c0%ae%c0%ae/%c0%ae%c0%ae/%c0%ae%c0%ae/%c0%ae%c0%ae/%c0%ae%c0%ae/%c0%ae%c0%ae/etc/hosts
/..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2Fetc%2Fpasswd
/..%252F..%252F..%252F..%252F..%252F..%252F..%252F..%252F..%252Fetc%252Fpasswd
/%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2fetc%2fpasswd
//././././././././././././././././././././././././../../../../../../../../etc/passwd
/etc/passwd
/file:///etc/passwd
/etc/hosts
/aa/../../cc/../../bb/../../dd/../../aa/../../cc/../../bb/../../dd/../../bb/../../dd/../../bb/../../dd/../../bb/../../dd/../../ee/../../etc/hosts
/proc/meminfo
/etc/profile
/resource/tutorial/jndi-appconfig/test?inputFile=/etc/passwd
/WEB-INF/web.xml
/WEB-INF/web.xml.bak
/WEB-INF/applicationContext.xml
/WEB-INF/applicationContext-slave.xml
/WEB-INF/config.xml
/WEB-INF/spring.xml
/WEB-INF/struts-config.xml
/WEB-INF/struts-front-config.xml
/WEB-INF/struts/struts-config.xml
/WEB-INF/classes/spring.xml
/WEB-INF/classes/struts.xml
/WEB-INF/classes/struts_manager.xml
/WEB-INF/classes/conf/datasource.xml
/WEB-INF/classes/data.xml
/WEB-INF/classes/config/applicationContext.xml
/WEB-INF/classes/applicationContext.xml
/WEB-INF/classes/conf/spring/applicationContext-datasource.xml
/WEB-INF/config/db/dataSource.xml
/WEB-INF/spring-cfg/applicationContext.xml
/WEB-INF/dwr.xml
/WEB-INF/classes/hibernate.cfg.xml
/WEB-INF/classes/rabbitmq.xml
/WEB-INF/database.properties
/WEB-INF/web.properties
/WEB-INF/log4j.properties
/WEB-INF/classes/dataBase.properties
/WEB-INF/classes/application.properties
/WEB-INF/classes/jdbc.properties
/WEB-INF/classes/db.properties
/WEB-INF/classes/conf/jdbc.properties
/WEB-INF/classes/security.properties
/WEB-INF/conf/database_config.properties
/WEB-INF/config/dbconfig
/WEB-INF/conf/activemq.xml
/server.xml
/config/database.yml
/configprops
/phpinfo.php
/phpinfo.php5
/info.php
/php.php
/pi.php
/mysql.php
/sql.php
/shell.php
/apc.php
/test.sh
/logs.sh
/test/
/test.php
/temp.php
/tmp.php
/test2.php
/test.html
/test2.html
/test.txt
/test2.txt
/debug.php
/a.php
/b.php
/t.php
/i.php
/x.php
/1.php
/123.php
/test.cgi
/test-cgi
/cgi-bin/test-cgi
/cgi-bin/test
/cgi-bin/test.cgi
/zabbix/jsrpc.php
/jsrpc.php

dirFinder.py

#! /usr/bin/env python
# -*- coding:utf-8 -*-
#from flask import Flask, request, json, Response, jsonify
import json
import threading
import requests
import urllib2
import sys
from time import ctime,sleep
import threadpool

#app = Flask(__name__)
#@app.route('/', methods = ['GET','POST'])
def main():
    #if request.method == 'GET':
    #geturl = request.args.get('geturl')
    f = open("url.txt")
    line = f.readlines()
    global g_list
    g_list = []
    urllist = []
    list1 = []
    for u in line:
        u = u.rstrip()
        #dir = ['/admin','/t/344205']
        dir = open("rule.txt")
        dirline = dir.readlines()
        for d in dirline:
            d = d.rstrip()
            scheme = ['http://','https://']
            for s in scheme:
                #print type(s)
                #print type(geturl)
                #print type(d)
                url = s + u + d       # scheme + domain + rule path
                list1.append(url)
    thread_requestor(list1)
    #return json.dumps(g_list)
    f = open('vulurl.txt','w')
    f.write(json.dumps(g_list))
    f.close()

def res_printer(res1,res2):
    # threadpool callback: collect every URL that getScan returned (status 200)
    if res2:
        g_list.append(res2)
    else:
        pass

def thread_requestor(urllist):
    pool = threadpool.ThreadPool(200)   # 200 worker threads
    reqs = threadpool.makeRequests(getScan,urllist,res_printer)
    [pool.putRequest(req) for req in reqs]
    pool.wait()

def getScan(url):
    try:
        requests.packages.urllib3.disable_warnings()
        status = requests.get(url, allow_redirects=False, timeout=3,verify=False).status_code
        print "scanning " + url
        if status == 200:
            return url
        else:
            pass
    except:
        pass

if __name__ == "__main__":
    main()
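A quick usage note: threadpool is a third-party package (pip install threadpool), as is requests. Hits land in vulurl.txt as a JSON array, so reading them back takes one call; a small sketch:

import json

hits = json.load(open('vulurl.txt'))   # the script dumps g_list as a JSON array
for u in hits:
    print u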

The scan results are as follows:

To sum up, the functionality is quite powerful. Let's learn from each other ~~~