A Generic Sensitive-File Disclosure Vulnerability Found Across the Banking Sector

I stumbled on the fact that bank portal sites tend to leak sensitive files; some even expose xwork-related information, though a firewall stood in the way and I did not go further to verify a bypass. This generic flaw mainly leaks internally uploaded files (.doc, .pdf, .xml, .ppt, .rar, and so on) along with sensitive URLs. Testing down a ranking list of banks, more than twenty were affected, and continuing to look just kept turning up the same pattern.
First scan with wapiti: wapiti http://bank.com -v 2. Its crawler module turns out to be remarkably strong and digs up plenty of Ajax URLs. For some targets I used uniscan-gui instead: a few things wapiti misses, uniscan finds, though overall wapiti's crawler is the stronger of the two. uniscan-gui is graphical, so just enter the URL, check directories and files (or select everything to gather other information as well), and some XSS issues get flagged directly.
A uniscan-gui scan surfaces a lot of document URLs. Any of them may be a sensitive file, but you only know after downloading and inspecting it; doing that by hand is slow, so I wrote a Python script to automate the downloads:

import re
import requests

requests.packages.urllib3.disable_warnings()  # silence the warnings caused by verify=False

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36",
    "Accept": "*/*",
    "Accept-Language": "zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3",
    "Accept-Encoding": "gzip, deflate",
    "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    "Connection": "keep-alive",
    "X-Requested-With": "XMLHttpRequest",
}

# stream one file from a URL to disk
def downfile(url, headers):
    url = url.strip()
    filename = url.split('/')[-1]  # take the filename from the last URL segment
    headers['Referer'] = re.findall(r"^https?://.*?/", url, re.I)[0]  # Referer = site root
    data = requests.get(url=url, stream=True, verify=False, headers=headers)
    with open(filename, 'wb') as f:
        for chunk in data.iter_content(chunk_size=1024):
            if chunk:  # write the body to disk as a stream
                f.write(chunk)
    print("File", filename, "downloaded!")

#downfile("http://www.*bank.com.cn/upload/Attach/mrbj/2702888780.pdf", headers)

for line in open("1.txt"):
    downfile(line, headers)
print("exit!")

Once everything has been downloaded automatically, review the files and single out the ones that should not be downloadable by arbitrary users.
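One way to speed up that review is to probe each URL with an unauthenticated client first and log the status code and Content-Type: anything an anonymous visitor gets a 200 for deserves a closer look. A minimal sketch along those lines, reusing the same 1.txt list (the triage.txt output name is arbitrary):

import requests

requests.packages.urllib3.disable_warnings()

# probe each candidate URL anonymously and record how the server answers
with open("triage.txt", "w") as out:
    for line in open("1.txt"):
        url = line.strip()
        if not url:
            continue
        try:
            r = requests.head(url, verify=False, allow_redirects=True, timeout=10)
            ctype = r.headers.get("Content-Type", "?")
            out.write("%s %s %s\n" % (r.status_code, ctype, url))
        except requests.RequestException as e:
            out.write("ERR %s %s\n" % (e, url))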
If you scan with wapiti instead, the crawl results contain every URL on the site, so the data has to be cleaned before it can be fed to the downloader. I wrote another small script to help with that:

# copy every URL from bak.txt to 1.txt, skipping .html pages
with open('1.txt', 'w') as file_object:
    for line in open("bak.txt"):
        if ".html" not in line:
            print(line, end='')
            file_object.write(line)
print("exit!")

Adapt the script to whatever the actual data looks like.
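If the crawl output is messy, an extension whitelist can be more reliable than excluding .html. A sketch of that variant (the extension tuple is just an assumption matching the file types mentioned earlier):

# keep only URLs that end in a document-type extension
doc_exts = ('.doc', '.docx', '.pdf', '.xml', '.ppt', '.xls', '.rar', '.zip')

with open('1.txt', 'w') as out:
    for line in open("bak.txt"):
        url = line.strip()
        if url.lower().endswith(doc_exts):
            out.write(url + '\n')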
Later I also came across metagoofil, an open-source tool that collects files via Google; its principle is quite different from wapiti's crawler.
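The idea behind it is Google dorking: instead of crawling the site yourself, ask the search engine for documents it has already indexed. Roughly the kind of queries involved (this sketch only builds the query URLs, it does not automate Google; the domain is a placeholder):

from urllib.parse import quote_plus

# build the filetype dork queries that this style of tool relies on
domain = "example-bank.com"  # placeholder target
for ext in ("pdf", "doc", "xls", "ppt"):
    query = "site:%s filetype:%s" % (domain, ext)
    print("https://www.google.com/search?q=" + quote_plus(query))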

Kali Web Applications: Using paros

paros is another proxy tool for man-in-the-middle interception. I had wanted to look at the packets an iPhone sends for text messages, but those presumably aren't HTTP; this tool seems to capture HTTP only, with no support for raw sockets, so Wireshark remains the fallback. A quick try with paros confirmed it. Several of its modules resemble Burp Suite's, and since Kali 2.0 went to the trouble of adding it, it presumably has strengths somewhere. To use it, open Options -> Local proxy, set the proxy IP and port, and it works from there:
kali-paros
While capturing the iPhone's traffic with paros an extra IP showed up; iOS was updating applications at the time, so it was probably app traffic. Sending an iMessage reported a delivery failure; I'll capture with Wireshark and dig into it later.

Kali Web Applications: Using commix

commix has apparently been around for a while and the experts used it long ago. It is an automated command-injection tool with plenty of write-ups online; Kali 1.0 did not ship it, and Kali 2.0 added it.

+--
Automated All-in-One OS Command Injection and Exploitation Tool
Copyright (c) 2014-2016 Anastasios Stasinopoulos (@ancst)
+--

Usage: python commix.py [options]

Options:
-h, --help          Show help and exit.

General:
These options relate to general matters.

-v VERBOSE          Verbosity level: 0-1 (default 0)
--version           Show version number and exit.
--output-dir=OUT..  Set custom output directory path.
-s SESSION_FILE     Load session from a stored (.sqlite) file.
--flush-session     Flush session files for current target.
--ignore-session    Ignore results stored in session file.

Target:
These options have to be provided to define the target URL.

-u URL, --url=URL   Target URL.
--url-reload        Reload target URL after command execution.
-l LOGFILE          Parse target and data from HTTP proxy log file.

Request:
These options can be used to specify how to connect to the target URL.

--data=DATA         Data string to be sent through POST.
--host=HOST         HTTP Host header.
--referer=REFERER   HTTP Referer header.
--user-agent=AGENT  HTTP User-Agent header.
--random-agent      Use a randomly selected HTTP User-Agent header.
--param-del=PDEL    Set character for splitting parameter values.
--cookie=COOKIE     HTTP Cookie header.
--cookie-del=CDEL   Set character for splitting cookie values.
--headers=HEADERS   Extra headers (e.g. 'Header1:Value1\nHeader2:Value2').
--proxy=PROXY       Use a HTTP proxy (e.g. '127.0.0.1:8080').
--tor               Use the Tor network.
--tor-port=TOR_P..  Set Tor proxy port (Default: 8118).
--auth-url=AUTH_..  Login panel URL.
--auth-data=AUTH..  Login parameters and data.
--auth-type=AUTH..  HTTP authentication type (e.g. 'Basic' or 'Digest').
--auth-cred=AUTH..  HTTP authentication credentials (e.g. 'admin:admin').
--ignore-401        Ignore HTTP error 401 (Unauthorized).
--force-ssl         Force usage of SSL/HTTPS.

Enumeration:
These options can be used to enumerate the target host.

--all               Retrieve everything.
--current-user      Retrieve current user name.
--hostname          Retrieve current hostname.
--is-root           Check if the current user has root privileges.
--is-admin          Check if the current user has admin privileges.
--sys-info          Retrieve system information.
--users             Retrieve system users.
--passwords         Retrieve system users password hashes.
--privileges        Retrieve system users privileges.
--ps-version        Retrieve PowerShell's version number.

File access:
These options can be used to access files on the target host.

--file-read=FILE..  Read a file from the target host.
--file-write=FIL..  Write to a file on the target host.
--file-upload=FI..  Upload a file on the target host.
--file-dest=FILE..  Host's absolute filepath to write and/or upload to.

Modules:
These options can be used to increase the detection and/or injection
capabilities.

--icmp-exfil=IP_..  The 'ICMP exfiltration' injection module.
(e.g. 'ip_src=192.168.178.1,ip_dst=192.168.178.3').
--dns-server=DNS..  The 'DNS exfiltration' injection module.
(Domain name used for DNS exfiltration attack).
--shellshock        The 'shellshock' injection module.

Injection:
These options can be used to specify which parameters to inject and to
provide custom injection payloads.

-p TEST_PARAMETER   Testable parameter(s).
--suffix=SUFFIX     Injection payload suffix string.
--prefix=PREFIX     Injection payload prefix string.
--technique=TECH    Specify injection technique(s) to use.
--maxlen=MAXLEN     Set the max length of output for time-related
injection techniques (Default: 10000 chars).
--delay=DELAY       Set custom time delay for time-related injection
techniques (Default: 1 sec).
--tmp-path=TMP_P..  Set the absolute path of web server's temp directory.
--root-dir=SRV_R..  Set the absolute path of web server's root directory.
--alter-shell=AL..  Use an alternative os-shell (e.g. 'Python').
--os-cmd=OS_CMD     Execute a single operating system command.
--os=OS             Force back-end operating system to this value.
--tamper=TAMPER     Use given script(s) for tampering injection data.

Detection:
These options can be used to customize the detection phase.

--level=LEVEL       Level of tests to perform (1-3, Default: 1).
--skip-calc         Skip the mathematic calculation during the detection
phase.

Miscellaneous:
--dependencies      Check for third-party (non-core) dependencies.
--skip-waf          Skip heuristic detection of WAF/IPS/IDS protection.
A basic trial:
commix -u http://0535code.com/index.php --data="id=1" --cookie="PHPSESSIONID=test" # the basic parameters feel a lot like sqlmap's; the next time a suitable URL turns up, I'll try this automated command-injection tool and see how it does.
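For a feel of what commix automates: the classic time-based check appends a sleep to the tested parameter and compares response times. A toy sketch of that idea only (the endpoint and parameter are hypothetical, and of course probe only targets you are authorized to test):

import time
import requests

# hypothetical endpoint whose 'addr' parameter reaches a shell command
base = "http://testsite.local/ping.php"

def timed(params):
    t0 = time.time()
    requests.get(base, params=params, timeout=30)
    return time.time() - t0

normal = timed({"addr": "127.0.0.1"})
injected = timed({"addr": "127.0.0.1; sleep 5"})

# if the payload executes, the second request lags by roughly 5 seconds
if injected - normal > 4:
    print("parameter looks injectable (time-based)")
else:
    print("no time difference observed")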

KaLi WEB程序 HTTrack 使用

I used HTTrack on Windows for a long time. Of the several crawlers I have tried it is the best: it mirrors a site's pages locally and crawls remarkably completely, which is its magic, though it is not suited to custom development. On Windows it could not be simpler: type in the URL and click through the wizard, like so:
windows-httrack
Under Kali, httrack takes a lot of parameters; the help text is below:

HTTrack version 3.48-24
usage: httrack <URLs> [-option] [+<URL_FILTER>] [-<URL_FILTER>] [+<mime:MIME_FILTER>] [-<mime:MIME_FILTER>]
with options listed below: (* is the default value)

General options:
O  path for mirror/logfiles+cache (-O path_mirror[,path_cache_and_logfiles]) (--path <param>)

Action options:
w *mirror web sites (--mirror)
W  mirror web sites, semi-automatic (asks questions) (--mirror-wizard)
g  just get files (saved in the current directory) (--get-files)
i  continue an interrupted mirror using the cache (--continue)
Y  mirror ALL links located in the first level pages (mirror links) (--mirrorlinks)

Proxy options:
P  proxy use (-P proxy:port or -P user:pass@proxy:port) (--proxy <param>)
%f *use proxy for ftp (f0 don't use) (--httpproxy-ftp[=N])
%b  use this local hostname to make/send requests (-%b hostname) (--bind <param>)

Limits options:
rN set the mirror depth to N (* r9999) (--depth[=N])
%eN set the external links depth to N (* %e0) (--ext-depth[=N])
mN maximum file length for a non-html file (--max-files[=N])
mN,N2 maximum file length for non html (N) and html (N2)
MN maximum overall size that can be uploaded/scanned (--max-size[=N])
EN maximum mirror time in seconds (60=1 minute, 3600=1 hour) (--max-time[=N])
AN maximum transfer rate in bytes/seconds (1000=1KB/s max) (--max-rate[=N])
%cN maximum number of connections/seconds (*%c10) (--connection-per-second[=N])
GN pause transfer if N bytes reached, and wait until lock file is deleted (--max-pause[=N])

Flow control:
cN number of multiple connections (*c8) (--sockets[=N])
TN timeout, number of seconds after a non-responding link is shutdown (--timeout[=N])
RN number of retries, in case of timeout or non-fatal errors (*R1) (--retries[=N])
JN traffic jam control, minimum transfer rate (bytes/seconds) tolerated for a link (--min-rate[=N])
HN host is abandoned if: 0=never, 1=timeout, 2=slow, 3=timeout or slow (--host-control[=N])

Links options:
%P *extended parsing, attempt to parse all links, even in unknown tags or Javascript (%P0 don't use) (--extended-parsing[=N])
n  get non-html files 'near' an html file (ex: an image located outside) (--near)
t  test all URLs (even forbidden ones) (--test)
%L <file> add all URL located in this text file (one URL per line) (--list <param>)
%S <file> add all scan rules located in this text file (one scan rule per line) (--urllist <param>)

Build options:
NN structure type (0 *original structure, 1+: see below) (--structure[=N])
or user defined structure (-N "%h%p/%n%q.%t")
%N  delayed type check, don't make any link test but wait for files download to start instead (experimental) (%N0 don't use, %N1 use for unknown extensions, * %N2 always use)
%D  cached delayed type check, don't wait for remote type during updates, to speed them up (%D0 wait, * %D1 don't wait) (--cached-delayed-type-check)
%M  generate a RFC MIME-encapsulated full-archive (.mht) (--mime-html)
LN long names (L1 *long names / L0 8-3 conversion / L2 ISO9660 compatible) (--long-names[=N])
KN keep original links (e.g. http://www.adr/link) (K0 *relative link, K absolute links, K4 original links, K3 absolute URI links, K5 transparent proxy link) (--keep-links[=N])
x  replace external html links by error pages (--replace-external)
%x  do not include any password for external password protected websites (%x0 include) (--disable-passwords)
%q *include query string for local files (useless, for information purpose only) (%q0 don't include) (--include-query-string)
o *generate output html file in case of error (404..) (o0 don't generate) (--generate-errors)
X *purge old files after update (X0 keep delete) (--purge-old[=N])
%p  preserve html files 'as is' (identical to '-K4 -%F ""') (--preserve)
%T  links conversion to UTF-8 (--utf8-conversion)

Spider options:
bN accept cookies in cookies.txt (0=do not accept,* 1=accept) (--cookies[=N])
u  check document type if unknown (cgi,asp..) (u0 don't check, * u1 check but /, u2 check always) (--check-type[=N])
j *parse Java Classes (j0 don't parse, bitmask: |1 parse default, |2 don't parse .class |4 don't parse .js |8 don't be aggressive) (--parse-java[=N])
sN follow robots.txt and meta robots tags (0=never,1=sometimes,* 2=always, 3=always (even strict rules)) (--robots[=N])
%h  force HTTP/1.0 requests (reduce update features, only for old servers or proxies) (--http-10)
%k  use keep-alive if possible, greatly reducing latency for small files and test requests (%k0 don't use) (--keep-alive)
%B  tolerant requests (accept bogus responses on some servers, but not standard!) (--tolerant)
%s  update hacks: various hacks to limit re-transfers when updating (identical size, bogus response..) (--updatehack)
%u  url hacks: various hacks to limit duplicate URLs (strip //, www.foo.com==foo.com..) (--urlhack)
%A  assume that a type (cgi,asp..) is always linked with a mime type (-%A php3,cgi=text/html;dat,bin=application/x-zip) (--assume <param>)
shortcut: '--assume standard' is equivalent to -%A php2 php3 php4 php cgi asp jsp pl cfm nsf=text/html
can also be used to force a specific file type: --assume foo.cgi=text/html
@iN internet protocol (0=both ipv6+ipv4, 4=ipv4 only, 6=ipv6 only) (--protocol[=N])
%w  disable a specific external mime module (-%w htsswf -%w htsjava) (--disable-module <param>)

Browser ID:
F  user-agent field sent in HTTP headers (-F "user-agent name") (--user-agent <param>)
%R  default referer field sent in HTTP headers (--referer <param>)
%E  from email address sent in HTTP headers (--from <param>)
%F  footer string in Html code (-%F "Mirrored [from host %s [file %s [at %s]]]") (--footer <param>)
%l  preferred language (-%l "fr, en, jp, *") (--language <param>)
%a  accepted formats (-%a "text/html,image/png;q=0.9,*/*;q=0.1") (--accept <param>)
%X  additional HTTP header line (-%X "X-Magic: 42") (--headers <param>)

Log, index, cache:
C  create/use a cache for updates and retries (C0 no cache,C1 cache is prioritary,* C2 test update before) (--cache[=N])
k  store all files in cache (not useful if files on disk) (--store-all-in-cache)
%n  do not re-download locally erased files (--do-not-recatch)
%v  display on screen filenames downloaded (in realtime) - * %v1 short version - %v2 full animation (--display)
Q  no log - quiet mode (--do-not-log)
q  no questions - quiet mode (--quiet)
z  log - extra infos (--extra-log)
Z  log - debug (--debug-log)
v  log on screen (--verbose)
f *log in files (--file-log)
f2 one single log file (--single-log)
I *make an index (I0 don't make) (--index)
%i  make a top index for a project folder (* %i0 don't make) (--build-top-index)
%I  make a searchable index for this mirror (* %I0 don't make) (--search-index)

Expert options:
pN priority mode: (* p3) (--priority[=N])
p0 just scan, don't save anything (for checking links)
p1 save only html files
p2 save only non html files
*p3 save all files
p7 get html files before, then treat other files
S  stay on the same directory (--stay-on-same-dir)
D *can only go down into subdirs (--can-go-down)
U  can only go to upper directories (--can-go-up)
B  can both go up&down into the directory structure (--can-go-up-and-down)
a *stay on the same address (--stay-on-same-address)
d  stay on the same principal domain (--stay-on-same-domain)
l  stay on the same TLD (eg: .com) (--stay-on-same-tld)
e  go everywhere on the web (--go-everywhere)
%H  debug HTTP headers in logfile (--debug-headers)

Guru options: (do NOT use if possible)
#X *use optimized engine (limited memory boundary checks) (--fast-engine)
#0  filter test (-#0 '*.gif' 'www.bar.com/foo.gif') (--debug-testfilters <param>)
#1  simplify test (-#1 ./foo/bar/../foobar)
#2  type test (-#2 /foo/bar.php)
#C  cache list (-#C '*.com/spider*.gif') (--debug-cache <param>)
#R  cache repair (damaged cache) (--repair-cache)
#d  debug parser (--debug-parsing)
#E  extract new.zip cache meta-data in meta.zip
#f  always flush log files (--advanced-flushlogs)
#FN maximum number of filters (--advanced-maxfilters[=N])
#h  version info (--version)
#K  scan stdin (debug) (--debug-scanstdin)
#L  maximum number of links (-#L1000000) (--advanced-maxlinks[=N])
#p  display ugly progress information (--advanced-progressinfo)
#P  catch URL (--catch-url)
#R  old FTP routines (debug) (--repair-cache)
#T  generate transfer ops. log every minute (--debug-xfrstats)
#u  wait time (--advanced-wait)
#Z  generate transfer rate statistics every minute (--debug-ratestats)

Dangerous options: (do NOT use unless you exactly know what you are doing)
%!  bypass built-in security limits aimed to avoid bandwidth abuses (bandwidth, simultaneous connections) (--disable-security-limits)
IMPORTANT NOTE: DANGEROUS OPTION, ONLY SUITABLE FOR EXPERTS
USE IT WITH EXTREME CARE

Command-line specific options:
V execute system command after each files ($0 is the filename: -V "rm \$0") (--userdef-cmd <param>)
%W use an external library function as a wrapper (-%W myfoo.so[,myparameters]) (--callback <param>)

Details: Option N
N0 Site-structure (default)
N1 HTML in web/, images/other files in web/images/
N2 HTML in web/HTML, images/other in web/images
N3 HTML in web/, images/other in web/
N4 HTML in web/, images/other in web/xxx, where xxx is the file extension (all gif will be placed onto web/gif, for example)
N5 Images/other in web/xxx and HTML in web/HTML
N99 All files in web/, with random names (gadget !)
N100 Site-structure, without www.domain.xxx/
N101 Identical to N1 except that "web" is replaced by the site's name
N102 Identical to N2 except that "web" is replaced by the site's name
N103 Identical to N3 except that "web" is replaced by the site's name
N104 Identical to N4 except that "web" is replaced by the site's name
N105 Identical to N5 except that "web" is replaced by the site's name
N199 Identical to N99 except that "web" is replaced by the site's name
N1001 Identical to N1 except that there is no "web" directory
N1002 Identical to N2 except that there is no "web" directory
N1003 Identical to N3 except that there is no "web" directory (option set for g option)
N1004 Identical to N4 except that there is no "web" directory
N1005 Identical to N5 except that there is no "web" directory
N1099 Identical to N99 except that there is no "web" directory
Details: User-defined option N
'%n' Name of file without file type (ex: image)
'%N' Name of file, including file type (ex: image.gif)
'%t' File type (ex: gif)
'%p' Path [without ending /] (ex: /someimages)
'%h' Host name (ex: www.someweb.com)
'%M' URL MD5 (128 bits, 32 ascii bytes)
'%Q' query string MD5 (128 bits, 32 ascii bytes)
'%k' full query string
'%r' protocol name (ex: http)
'%q' small query string MD5 (16 bits, 4 ascii bytes)
'%s?' Short name version (ex: %sN)
'%[param]' param variable in query string
'%[param:before:after:empty:notfound]' advanced variable extraction
Details: User-defined option N and advanced variable extraction
%[param:before:after:empty:notfound]
param : parameter name
before : string to prepend if the parameter was found
after : string to append if the parameter was found
notfound : string replacement if the parameter could not be found
empty : string replacement if the parameter was empty
all fields, except the first one (the parameter name), can be empty

Details: Option K
K0   foo.cgi?q=45  ->  foo4B54.html?q=45 (relative URI, default)
K    ->  http://www.foobar.com/folder/foo.cgi?q=45 (absolute URL) (--keep-links[=N])
K3   ->  /folder/foo.cgi?q=45 (absolute URI)
K4   ->  foo.cgi?q=45 (original URL)
K5   ->  http://www.foobar.com/folder/foo4B54.html?q=45 (transparent proxy URL)

Shortcuts:
--mirror      <URLs> *make a mirror of site(s) (default)
--get         <URLs>  get the files indicated, do not seek other URLs (-qg)
--list   <text file>  add all URL located in this text file (-%L)
--mirrorlinks <URLs>  mirror all links in 1st level pages (-Y)
--testlinks   <URLs>  test links in pages (-r1p0C0I0t)
--spider      <URLs>  spider site(s), to test links: reports Errors & Warnings (-p0C0I0t)
--testsite    <URLs>  identical to --spider
--skeleton    <URLs>  make a mirror, but gets only html files (-p1)
--update              update a mirror, without confirmation (-iC2)
--continue            continue a mirror, without confirmation (-iC1)

--catchurl            create a temporary proxy to capture an URL or a form post URL
--clean               erase cache & log files

--http10              force http/1.0 requests (-%h)

Details: Option %W: External callbacks prototypes
see htsdefines.h

example: httrack www.someweb.com/bob/
means:   mirror site www.someweb.com/bob/ and only this site

example: httrack www.someweb.com/bob/ www.anothertest.com/mike/ +*.com/*.jpg -mime:application/*
means:   mirror the two sites together (with shared links) and accept any .jpg files on .com sites

example: httrack www.someweb.com/bob/bobby.html +* -r6
means get all files starting from bobby.html, with 6 link-depth, and possibility of going everywhere on the web

example: httrack www.someweb.com/bob/bobby.html --spider -P proxy.myhost.com:8080
runs the spider on www.someweb.com/bob/bobby.html using a proxy

example: httrack --update
updates a mirror in the current folder

example: httrack
will bring you to the interactive mode

example: httrack --continue
continues a mirror in the current folder

HTTrack version 3.48-24
Copyright (C) 1998-2016 Xavier Roche and other contributors

It has a great many features; the examples in the help above show how to use them. A quick trial:

root@0535code:~# httrack http://0535code.com/index.php
WARNING! You are running this program as root!
It might be a good idea to run as a different user
Mirror launched on Tue, 18 Oct 2016 22:26:24 by HTTrack Website Copier/3.48-24 [XR&CO'2014]
mirroring http://0535code.com/index.php with the wizard help..
^C0535code.com/tag/js (23999 bytes) - OK

A 0535code.com folder is then created automatically in the current working directory; when the crawl finishes, open the index file inside it.
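Since the theme here is hunting leaked documents, it is worth inventorying what the mirror actually pulled down. A small sketch that walks the mirror folder and tallies file extensions (assuming the folder name HTTrack generated above):

import os
from collections import Counter

# tally file extensions inside an HTTrack mirror directory
mirror_dir = "0535code.com"  # the folder HTTrack created
counts = Counter()
for root, dirs, files in os.walk(mirror_dir):
    for name in files:
        ext = os.path.splitext(name)[1].lower() or "(none)"
        counts[ext] += 1

for ext, n in counts.most_common():
    print(ext, n)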

Kali Exploit Database: Looking Up Vulnerabilities with searchsploit

Usage: searchsploit [options] term1 [term2] ... [termN]
Example: searchsploit oracle windows local

=======
Options
=======

-c          Perform case-sensitive searches; by default, searches will
try to be greedy
-h, --help  Show help screen
-v          By setting verbose output, description lines are allowed to
overflow their columns

*NOTES*
Use any number of search terms you would like (minimum of one).
Search terms are not case sensitive, and order is irrelevant.

Trying it out:
root@0535coder:~# searchsploit dede
Description                                    Path
---------------------------------------------  ----------------------------------
DedeCMS 5.1 - SQL Injection                   | /php/webapps/9876.txt
Dede CMS All Versions SQL Injection Vulnerab  | /php/webapps/18292.txt
DeDeCMS 5.5 '_SESSION[dede_admin_id]' Parame  | /php/webapps/33685.html

Very powerful: these are all exploits, with the attack details stored in the exploit database.

root@0535coder:~# locate searchsploit
/usr/bin/searchsploit
/usr/share/applications/kali-searchsploit.desktop
/usr/share/exploitdb/searchsploit
/usr/share/kali-menu/applications/kali-searchsploit.desktop
/usr/share/man/man1/searchsploit.1.gz
The exploits themselves live under /usr/share/exploitdb/platforms/
The exploit index is /usr/share/exploitdb/files.csv (queried directly in the sketch below)
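searchsploit is essentially a keyword lookup over that CSV index, so it can be queried directly as well. A minimal sketch, assuming the usual id/file/description/... column layout (check the header line of your copy first):

import csv

# grep the exploit index for a keyword, the way searchsploit does
keyword = "dede"
with open("/usr/share/exploitdb/files.csv", newline="", encoding="utf-8", errors="replace") as f:
    for row in csv.DictReader(f):
        if keyword.lower() in row["description"].lower():
            print(row["description"], "|", row["file"])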
The bundled copy is apparently rather old, so update it:
Official Exploit-Database GitHub repository: https://github.com/offensive-security/exploit-database
Copy searchsploit, files.csv, and the platforms directory into /usr/share/exploitdb/ and you are done.
As a bonus, 360's vulnerability database for open-source software:
http://webscan.360.cn/vul lets you look up vulnerabilities in open-source packages as well. It gives no exploit details, so you have to hunt those down yourself, but at least you learn which known security holes, and how many high-risk ones, the open-source software you use carries.

Kali Web Crawling Tools: Using dirb

-----------------
DIRB v2.21
By The Dark Raver
-----------------

./dirb <url_base> [<wordlist_file(s)>] [options]

========================= NOTES =========================
<url_base> : Base URL to scan. (Use -resume for session resuming)
<wordlist_file(s)> : List of wordfiles. (wordfile1,wordfile2,wordfile3...)

======================== HOTKEYS ========================
'n' -> Go to next directory.
'q' -> Stop scan. (Saving state for resume)
'r' -> Remaining scan stats.

======================== OPTIONS ========================
-a <agent_string> : Specify your custom USER_AGENT.
-c <cookie_string> : Set a cookie for the HTTP request.
-f : Fine tuning of NOT_FOUND (404) detection.
-H <header_string> : Add a custom header to the HTTP request.
-i : Use case-insensitive search.
-l : Print "Location" header when found.
-N <nf_code>: Ignore responses with this HTTP code.
-o <output_file> : Save output to disk.
-p <proxy[:port]> : Use this proxy. (Default port is 1080)
-P <proxy_username:proxy_password> : Proxy Authentication.
-r : Don't search recursively.
-R : Interactive recursion. (Asks for each directory)
-S : Silent Mode. Don't show tested words. (For dumb terminals)
-t : Don't force an ending '/' on URLs.
-u <username:password> : HTTP Authentication.
-v : Show also NOT_FOUND pages.
-w : Don't stop on WARNING messages.
-X <extensions> / -x <exts_file> : Append each word with these extensions.
-z <millisecs> : Add a milliseconds delay to not cause excessive Flood.

======================== EXAMPLES =======================
./dirb http://url/directory/ (Simple Test)
./dirb http://url/ -X .html (Test files with '.html' extension)
./dirb http://url/ /usr/share/dirb/wordlists/vulns/apache.txt (Test with apache.txt wordlist)
./dirb https://secure_url/ (Simple Test with SSL)

How to use it:

dirb http://seofangfa.com  # with no wordlist specified, the default dictionary is loaded; you can also point it at your own, as below
dirb http://seofangfa.com /usr/share/dirb/wordlists/indexes.txt

Some builds are missing the default wordlist (/usr/share/dirb/wordlists/common.txt); in that case pass a wordlist path explicitly, as in the second command above. Note that per the help text, -f fine-tunes NOT_FOUND (404) detection; the wordlist itself is a positional argument, not a -f value.
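What dirb automates is simple enough to sketch: request each wordlist entry as a path and report whatever does not come back 404. A bare-bones illustration without recursion or 404 fine-tuning (target and wordlist taken from the examples above):

import requests

base = "http://seofangfa.com/"  # target from the example above
wordlist = "/usr/share/dirb/wordlists/common.txt"

# probe each wordlist entry as a path and report non-404 responses
for word in open(wordlist, encoding="latin-1"):
    word = word.strip()
    if not word or word.startswith("#"):
        continue
    r = requests.get(base + word, allow_redirects=False, timeout=10)
    if r.status_code != 404:
        print(r.status_code, base + word)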

Kali Web Crawling Tools: Using CutyCapt

-----------------------------------------------------------------------------
Usage: CutyCapt --url=http://www.example.org/ --out=localfile.png
-----------------------------------------------------------------------------
--help                         Print this help page and exit
--url=<url>                    The URL to capture (http:...|file:...|...)
--out=<path>                   The target file (.png|pdf|ps|svg|jpeg|...)
--out-format=<f>               Like extension in --out, overrides heuristic
--min-width=<int>              Minimal width for the image (default: 800)
--min-height=<int>             Minimal height for the image (default: 600)
--max-wait=<ms>                Don't wait more than (default: 90000, inf: 0)
--delay=<ms>                   After successful load, wait (default: 0)
--user-style-path=<path>       Location of user style sheet file, if any
--user-style-string=<css>      User style rules specified as text
--header=<name>:<value>        request header; repeatable; some can't be set
--method=<get|post|put>        Specifies the request method (default: get)
--body-string=<string>         Unencoded request body (default: none)
--body-base64=<base64>         Base64-encoded request body (default: none)
--app-name=<name>              appName used in User-Agent; default is none
--app-version=<version>        appVers used in User-Agent; default is none
--user-agent=<string>          Override the User-Agent header Qt would set
--javascript=<on|off>          JavaScript execution (default: on)
--java=<on|off>                Java execution (default: unknown)
--plugins=<on|off>             Plugin execution (default: unknown)
--private-browsing=<on|off>    Private browsing (default: unknown)
--auto-load-images=<on|off>    Automatic image loading (default: on)
--js-can-open-windows=<on|off> Script can open windows? (default: unknown)
--js-can-access-clipboard=<on|off> Script clipboard privs (default: unknown)
--print-backgrounds=<on|off>   Backgrounds in PDF/PS output (default: off)
--zoom-factor=<float>          Page zoom factor (default: no zooming)
--zoom-text-only=<on|off>      Whether to zoom only the text (default: off)
--http-proxy=<url>             Address for HTTP proxy server (default: none)
-----------------------------------------------------------------------------
<f> is svg,ps,pdf,itext,html,rtree,png,jpeg,mng,tiff,gif,bmp,ppm,xbm,xpm
-----------------------------------------------------------------------------
http://cutycapt.sf.net - (c) 2003-2010 Bjoern Hoehrmann - bjoern@hoehrmann.de

This screenshot tool is rather handy. Usage:
cutycapt --url=http://0535code.com/ --out=1.jpg saves the visited page as an image file; with the options above you can also customize the image dimensions, the request parameters, and so on.
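When a scan has produced a long URL list, a screenshot of each page gives fast visual triage. A small wrapper sketch that relies only on the --url/--out options documented above (the urls.txt input name is arbitrary):

import subprocess

# screenshot every URL in urls.txt with CutyCapt
for i, line in enumerate(open("urls.txt")):
    url = line.strip()
    if not url:
        continue
    out = "shot_%03d.png" % i
    subprocess.run(["cutycapt", "--url=" + url, "--out=" + out], timeout=120)
    print("saved", out, "for", url)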

Kali Web Crawling Tools: Using apache-users

USAGE: apache.pl [-h 1.2.3.4] [-l names] [-p 80] [-s (SSL Support 1=true 0=false)] [-e 403 (http code)] [-t threads]

-h # target host
-l # username wordlist file
-p # target port
-s # use SSL or not (1=true, 0=false)
-e # HTTP response code treated as a hit
-t # number of threads

Example:
apache-users -h 0535code.com -l /usr/share/dirbuster/wordlists/apache-user-enum-2.0.txt -p 80 -s 0 -e 403 -t 10
How well the tool performs depends entirely on the wordlist; it doesn't feel as good as dirb.
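What the tool checks is Apache mod_userdir behaviour: /~name usually answers 403 (or 200) for an existing account and 404 otherwise, which is why the example passes -e 403. A toy version of the same probe, assuming the target and wordlist from the example:

import requests

base = "http://0535code.com/~%s"
wordlist = "/usr/share/dirbuster/wordlists/apache-user-enum-2.0.txt"

# a /~user path answering 403 or 200 suggests the account exists
for name in open(wordlist, encoding="latin-1"):
    name = name.strip()
    if not name:
        continue
    r = requests.get(base % name, allow_redirects=False, timeout=10)
    if r.status_code in (200, 403):
        print("possible user:", name, "(HTTP %d)" % r.status_code)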