Archive for the ‘Security Research Tools’ Category

Suricata: A Next Generation IDS/IPS Engine

January 7, 2010


Last Thursday, I was very glad that the Open Information Security Foundation (OISF) released the first public beta version of Suricata. It has been three years in the making. Several new releases are expected this month, culminating in a production-quality release shortly thereafter. OISF describes Suricata as ‘an Open Source Next Generation Intrusion Detection and Prevention Tool, not intended to just replace or emulate the existing tools in the industry, but to bring new ideas and technologies to the field.’ It is looking very promising.

The Suricata Engine and the HTP Library are available to use under the GPLv2. The new engine supports ‘Multi-Threading, Automatic Protocol Detection (IP, TCP, UDP, ICMP, HTTP, TLS, FTP and SMB!), Gzip Decompression, Fast IP Matching and coming soon hardware acceleration on CUDA and OpenCL GPU cards’. GPU integration allows the use of graphics cards to accelerate operations. Mike Cloppert, in his post ‘Detection, Bandwidth, and Moore’s Law’, pointed out:

It appears the authors well understand the point in this post, and the corresponding state of the art in solving parallel computing problems. GPU’s are emerging as a good commodity solution to parallel processing. This is covered in depth by a number of recent publications discussing parallelism, and I am by no means an expert in this field, so I will simply leave follow-up on this point as an exercise for the reader.

The HTP Library is an HTTP normalizer and parser written by Ivan Ristic, creator of ModSecurity and author of the soon to be released book ‘ModSecurity Handbook’. This integrates and provides very advanced processing of HTTP streams. The HTP library is required by the engine, but may also be used independently in a range of applications and tools. Additional details have been provided by Ivan in his post, ‘HTTP parser for intrusion detection and web application firewalls.’ Ivan writes concerning the development, ‘For the first release of the parser the goal is to be able to parse HTTP streams reliably. In the subsequent versions I will work in the parser’s security properties (such as the ability to see through evasion attacks).’

New Ideas and Concepts

Quoting from the OISF announcement, some of the next generation capabilities include:

  • Multi-Threading: so very necessary.
  • Automatic Protocol Detection: the engine has keywords for IP, TCP, UDP, ICMP, HTTP, TLS, FTP and SMB. Users can write rules to detect a match within a stream regardless of the port the stream occurs on. This is important for malware detection and control. Detections for more layer 7 protocols are being developed.
  • Gzip Decompression: the HTP Parser will decode Gzip compressed streams.
  • Independent HTP Library: the HTP Parser will be usable by other applications such as proxies, filters, etc. The parser is available as a library under GPLv2 for easy integration into other tools.
  • Standard Input Methods: support for NFQueue, PF_RING, and the standard LibPcap to capture traffic. IPFW support will be available soon.
  • Unified2 Output: support for standard output tools and methods.
  • Flow Variables: it is possible to capture information out of a stream and save that in a variable which can then be matched against later.
  • Fast IP Matching: the engine will automatically use a special fast matching preprocessor on rules that are IP matches only (such as the RBN and compromised IP lists at Emerging Threats).
  • HTTP Log Module: HTTP requests can be automatically output into an Apache-style log format file for monitoring and logging activity completely independent of rulesets and matching.
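To make the Automatic Protocol Detection bullet concrete, here is a hypothetical rule in the Snort-compatible syntax Suricata accepts. The `http` protocol keyword (rather than a port number) is what lets the engine match the stream wherever it occurs; the content, message, and sid are invented for illustration:

```
# Hypothetical rule: keyed on the HTTP protocol keyword, not port 80,
# so it fires even when the server speaks HTTP on a non-standard port.
alert http any any -> any any (msg:"Suspicious request for cmd.exe"; content:"cmd.exe"; nocase; sid:1000001; rev:1;)
```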

A few features to look forward to in a few weeks:

  • Global Flow Variables: the ability to store more information from a stream or match (actual data, not just setting a bit) over a period of time allowing comparing values across many streams and time.
  • Graphics Card Acceleration: using CUDA and OpenCL to make use of the processing power of even old graphics cards to accelerate the IDS. Offloading the very computationally intensive functions of the sensor will greatly enhance performance.
  • IP Reputation: will allow sensors and organizations to share intelligence and eliminate many false positives.
  • Windows Binaries: will be released once there is a reasonably stable body of code.

Folks Behind It

The team is listed on the OISF site. It is an all-star cast including Matt Jonkman, Victor Julien, Will Metcalf, Nathan Jimerson, Margaret Skinner, Josh Smith, Brian Rectanus, Breno Silva Pinto, Anoop Saldanha, Gurvinder Singh Dahiya, Jason MacLulich, Jason Ish, Kirby Kuehl, Dennis Henderson, Martin Solum, Ivan Ristic, Pablo Rincon, and Gerardo Iglesias Galvan.

I also wanted to point out some of the heavy-hitting organizations involved. The initial funding for OISF comes from the US Department of Homeland Security (DHS), the US Navy’s Space and Naval Warfare Systems Command (SPAWAR), and a number of private companies that participate in the OISF Consortium. The OISF is a part of the DHS Homeland Open Security Technology (HOST) program. OISF works with the Open Source Software Institute and has received legal guidance from the Software Freedom Law Center.

OISF is a US 501(c)(3) nonprofit and will not commercialize, sell, patent, copyright, or profit from the engine. OISF Consortium members are donating coders, equipment, and financial support in exchange for the ability to commercialize the engine. The important takeaway is that OISF has long-term support for future development of Suricata.

Final Thoughts

Suricata is a very exciting and promising IDS/IPS engine. It has a great group of people behind it, and future development appears secure. The project is still in its early stages: do not expect to download it and simply install it in a production environment. For testing the software and providing feedback, the engine and the HTP Library are available for download. To keep apprised of the latest developments, join the OISF mailing lists, where you can discuss and share feedback. The blog of Victor Julien, Suricata’s lead developer, is another great source for the latest news and information.

To finally answer the burning question: why the name Suricata? According to the OISF site, Suricata comes from the Latin genus name for the meerkat and ‘the Meerkat takes security and vigilance as a life or death responsibility. There is always at least one individual on guard, watching, ready to alert the entire organization. Very much like an IDS sensor. It is always watching, always ready to alert you to danger. Or something like that…’

(Via System Advancements at the Monastery.)

Shared Threat Monitoring Protects Enterprise

December 11, 2009


By Michael O’Connor, President of IronClad Consulting

Recently, as detailed by Anthony Freed of, Larry Clinton of the Internet Security Alliance presented information to Congress regarding security and protecting privacy in cyberspace.

First of all, it is encouraging to hear that these kinds of discussions are being presented in D.C. Thanks to Larry Clinton and his team for representing these very important issues.

I agree with the feel of Larry’s suggestions — that it is not necessarily ‘compliance’ that will resolve our concerns, and that more practical means must be established.

If this is so, I would recommend ongoing monitoring as the key. And if monitoring is the key, how does this affect businesses, individuals, and personal privacy?

And what role does government play, if any? Can we balance good monitoring and security with privacy?

My laptop is monitored constantly by security software. In return for the service, I voluntarily give up some information.

However, this information is about my system and not me personally (other than standard billing info, which is public anyway, minus the credit card data).

Do you think a similar solution could be implemented business-wide, to help monitor and keep businesses free from harmful attacks?

Perhaps ‘compliance’, in such a model, would be gained by agreeing to opt in to the monitoring system.

Going along with one of Larry’s future objectives – information sharing – threats exposed in such a system could become immediately beneficial to other businesses that are hooked in.

Some companies are already attempting this strategy. The general concept is to create a sort of ‘reputation’ around the data elements of the transaction.

The more unique the data elements and the more clients use (and contribute to) the reputation, the more valuable the reputation becomes.

Reputation can be tied to elements such as an IP address (as with MaxMind), a ‘client device ID’ (CDI, as with 41st parameter, Kount, or iovation), a credit card number (as with Visa’s neural network), and so on.

Ostensibly, the most unique and valuable data element would be the client device ID.

It provides a much more concrete identification mechanism than the other, dynamic and changeable elements such as email address, shipping/billing address, name, phone number, etc.

Thus, gathering these – and especially sharing them – would provide an excellent foundation for a monitoring system.

Ideally, both government and private sectors would contribute to the system, which would provide real-time updates and warnings concerning devices that were previously known to be used in fraudulent activities.
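A minimal sketch of the shared-reputation idea described above, with every name and data value invented for illustration: each client device ID (CDI) accumulates fraud and legitimate reports from participating businesses, and a simple ratio summarizes its reputation for everyone hooked into the system.

```python
# Hypothetical shared device-reputation store. Real systems (MaxMind,
# iovation, etc.) are far more sophisticated; this only shows the concept.

class ReputationStore:
    def __init__(self):
        self.reports = {}  # CDI -> {"fraud": count, "ok": count}

    def report(self, cdi, fraudulent):
        """A participating business reports one transaction for this device."""
        entry = self.reports.setdefault(cdi, {"fraud": 0, "ok": 0})
        entry["fraud" if fraudulent else "ok"] += 1

    def score(self, cdi):
        """Fraction of reports flagging this device, or None if never seen."""
        entry = self.reports.get(cdi)
        if entry is None:
            return None
        return entry["fraud"] / (entry["fraud"] + entry["ok"])

store = ReputationStore()
store.report("device-abc", fraudulent=True)
store.report("device-abc", fraudulent=False)
print(store.score("device-abc"))  # 0.5
```

Note that nothing here is PII: only the opaque device identifier and its history cross organizational boundaries, which is the privacy argument made below.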

But what of privacy concerns?

An intrinsic benefit of CDI is that it does not hold Personally Identifiable Information (PII) within it.

You’re just looking at the device – and ideally the reputation surrounding it – rather than the person or private information behind the device.

The privacy concern becomes moot.

Granted, any client looking at the transaction has private information on their end (a retailer looking at the invoice, for example), and they could easily connect the PII and CDI together for their own purposes, but the PII portion would not be shared within the overarching monitoring system.

Moving full-circle back to the role of government, were they to adopt such a monitoring system and require that businesses take part in it as a requirement for a new kind of security ‘compliance’, we might see a positive shift from the bookshelf-breaking paper-based compliance of the past.

*   *   *


Michael O’Connor has been working in various operational management positions since 1994, and with online payment in particular since 2000. In 2003 he began a focused foray into fraud prevention while leading a team at, where they prevented millions of dollars in potential fraud losses from hitting the company’s bottom line. Michael was also fortunate enough to have served on the advisory board of the Merchant Risk Council and to have assisted in training an FBI CyberCrimes unit. Ironclad’s core objective is to make businesses safer and more profitable by providing unbiased consultation in the areas of payment facilitation, compliance, risk assessment, and fraud prevention best practices. The threats are inbound. Are you Ironclad?™

The Publisher gives permission to link, post, distribute, or reference this article for any lawful purpose, provided attribution is made to the author and to

(Via Information Security Resources.)

Shodan: Another Step Towards Intrusion as a Service

November 25, 2009

If you haven’t seen Shodan yet, you’re probably not using Twitter as a means to stay current on security issues. Shoot, I don’t even follow anyone and I heard about it.

Basically a programmer named John Matherly scanned a huge swath of the Internet for certain TCP ports (80, 21, 23 at least) and published the results in a database with a nice Web front-end. This means you can put your mind in Google hacking mode, find vulnerable platforms, maybe add in some default passwords (or not), and take over someone’s system. We’re several steps along the Intrusion as a Service (IaaS) path already!
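Shodan’s crawl is essentially banner grabbing at scale. Reduced to its simplest element, a survey like that starts from a TCP connect check per port; the sketch below shows just that step (host and ports are examples; only probe systems you are authorized to test).

```python
# Minimal TCP connect check, the building block of a Shodan-style survey.
# A real crawler would also read the service banner after connecting.
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# The ports mentioned above: HTTP, FTP, Telnet.
for p in (80, 21, 23):
    print(p, port_open("127.0.0.1", p))
```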

Incidentally, this idea is not new. I know at least one company that sold a service like this in 2004. The difference is that Shodan is free and open to the public.

Shodan is a dream for those wanting to spend Thanksgiving looking for vulnerable boxes, and a nightmare for their owners. I would not be surprised if it disappears in the next few days after receiving a call or two from TLAs or LEAs or .mils. I predict a mad scramble by intruders during the next 24-48 hours as they use Shodan to locate, own, and secure boxes before others do.

Matt Franz asked good questions about this site in his post Where’s the Controversy about Shodan? Personally I think Shodan will disappear. Many will argue that publishing information about systems is not a problem. We hear similar arguments from people defending sites that publish torrents. Personally I don’t have a problem with Shodan or torrent sites. From a personal responsibility issue it would have been nice to delay notification of Shodan until after Thanksgiving.

(Via TaoSecurity.)

Extending Security Event Correlation

November 16, 2009

Last year at this time I wrote a series of posts on security event correlation. I offered the following definition in the final post:

Security event correlation is the process of applying criteria to data inputs, generally of a conditional (‘if-then’) nature, in order to generate actionable data outputs.

Since then what I have found is that products and people still claim this as a goal, but for the most part achieving it remains elusive.

Please also see that last post for what SEC is not, i.e., SEC is not simply collection (of data sources), normalization (of data sources), prioritization (of events), suppression (via thresholding), accumulation (via simple incrementing counters), centralization (of policies), summarization (via reports), administration (of software), or delegation (of tasks).

So is SEC anything else? Based on some operational uses I have seen, I think I can safely introduce an extension to ‘true’ SEC: applying information from one or more data sources to develop context for another data source. What does that mean?

One example I saw recently (and this is not particularly new, but it’s definitely useful) involves NetWitness 9.0. Their new NetWitness Identity function adds user names collected from Active Directory to the metadata available while investigating network traffic. Analysts can choose to review sessions based on user names rather than just using source IP addresses.

This is certainly not an ‘if-then’ proposition, as sold by SIM vendors, but the value of this approach is clear. I hope my use of the word ‘context’ doesn’t attach too much historical security baggage to this conversation. I’m not talking about making IDS alerts more useful by knowing the qualities of a target of server-side attack, for example. Rather, to take the case of a server-side attack scenario, imagine replacing the source IP with the country ‘Bulgaria’ and the target IP with ‘Web server hosting Application X’ or similar. It’s a different way for an analyst to think about an investigation.
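The enrichment idea described above can be sketched in a few lines. This is not how NetWitness works internally; it is a toy illustration where the directory and GeoIP tables, IPs, and field names are all invented: one data source (sessions keyed by IP) gains context from two others (a user directory and a country lookup).

```python
# Toy context enrichment: swap bare IPs for analyst-friendly labels.
# All lookup data is fabricated for the example.
ad_users = {"10.0.0.5": "jsmith"}      # e.g., pulled from Active Directory
geoip    = {"89.25.13.7": "Bulgaria"}  # e.g., from a GeoIP database

def enrich(session):
    """Return a copy of the session with source country and user context
    added where the lookup tables have a match."""
    out = dict(session)
    out["src"] = geoip.get(session["src"], session["src"])
    out["user"] = ad_users.get(session["dst"])  # None if no directory entry
    return out

session = {"src": "89.25.13.7", "dst": "10.0.0.5", "proto": "http"}
print(enrich(session))
```

No ‘if-then’ rule fires here; the analyst simply sees ‘Bulgaria’ and ‘jsmith’ instead of two IP addresses, which is the point of the extension.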

(Via TaoSecurity.)

Hack From A Cave – pwntooth

November 11, 2009



pwntooth (pown-tooth) is designed to automate Bluetooth pen-testing. It scans for devices, then runs the tools specified in pwntooth.conf; included are blueper, bluesnarfer, Bluetooth Stack Smasher (BSS), carwhisperer, psm_scan, rfcomm_scan, and vcardblaster.

pwntooth is a fully automated ‘search and destroy’ tool for advanced users who wish to run a series of tests against each device in the target area. While there are some pre-configured lines in the pwntooth.conf file, it is mostly designed for users to specify their own pen-testing configuration. pwntooth can be used in conjunction with many other tools not included in the package.

Name: pwntooth-0.2.2.tar.gz
Size: 6.7 MB
MD5: a84a2b59a6253f52a2e74cdca000995b
Download: Click Here

v0.2.2 – 08/11/09:
– fixed makefile for tools.

v0.2.1


Recovering Firefox Passwords for World Domination

October 8, 2009


To quote Carlos ‘darkoperator’ Perez, ‘shell is just the beginning’. Now that we have access to a machine, we can gather all sorts of goodies; we just need to know where to look.

Some of my favorite local system information gathering techniques include grabbing Firefox stored passwords. Prior to version 3.5 (i.e., in version 3), the list of sites and associated passwords was stored in signons3.txt. If a master password is set, you also need the file ‘key3.db’, as it will allow you to unlock the password store. For Firefox versions 3.5 or later, you need to acquire the file ‘signons.sqlite’. For a detailed description of the contents and format of each of these files, check out the FirePassword page.

But why recover these usernames and passwords? How many people do you know who let their browser store passwords for them? Personally, I know a lot. Users store passwords for just about everything: personal sites, banking, and corporate resources.

Yes, corporate resources. If you have credentials to these resources, this may open up a whole new world to your testing. Imagine that you now have credentials to web-based management utilities allowing access to a million credit card numbers (or something equally juicy, such as Social Security numbers).

So how do we do it? First, grab the signons3.txt and key3.db files (or signons.sqlite for Firefox 3.5) and get them to a system where you can work with them. I’m finding that a Windows system is best, given the tools available. I’m using Windows 7 in a VM, with Firefox installed. Many of the tools like to look for the default Firefox profile directory, so I often copy the files there – I’m not concerned about the install of Firefox in this VM.

The Firefox browser itself can be used to view the passwords in the password store. Firefox 3.5 uses a different format for storing passwords: they are now stored in a SQLite database. If we copy the files (signons3.txt and key3.db) to the default Firefox profile (C:\Documents and Settings\[user]\Application Data\Mozilla\Profiles\[random].profile in many cases), run Firefox, and go to Tools -> Options -> Security -> Saved Passwords -> Show Passwords, we can see them in plain text. Neat, now we have the URL, username, and password! But wait, you mean now we are being asked for a master password? Well, we need to provide one in order to view the passwords!
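For the 3.5-style store, you don’t strictly need a browser to see what’s in it. A sketch, assuming the `moz_logins` table and column names used by signons.sqlite at the time of writing (verify against your sample): the hostnames are stored in the clear, while the username and password blobs remain NSS-encrypted, so decrypting them still requires key3.db and the master password.

```python
# Sketch: list sites and (still-encrypted) credential blobs from a
# Firefox 3.5 signons.sqlite file. The file path below is an example.
import os
import sqlite3

def dump_signons(path):
    """Return (hostname, encryptedUsername, encryptedPassword) rows."""
    con = sqlite3.connect(path)
    try:
        return con.execute(
            "SELECT hostname, encryptedUsername, encryptedPassword "
            "FROM moz_logins"
        ).fetchall()
    finally:
        con.close()

if os.path.exists("signons.sqlite"):
    for host, user_blob, pass_blob in dump_signons("signons.sqlite"):
        print(host, user_blob[:20], pass_blob[:20])
```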

We can use FireMaster to obtain the master password. FireMaster is a Windows-based master password brute force tool that operates against key3.db and signons3.txt. It will do all of the typical brute force attacks: dictionary, hybrid, and brute force. It is a fairly simple tool to use, but here are a few examples. In these examples, FireMaster is in the same directory as key3.db and signons3.txt, so my profile path is set as ‘.’ at the end of the command:

[Update: During the writing of this segment, I noted that the author updated FireMaster to automatically detect the Firefox version based on whether the information is stored in signons3.txt or in the SQLite database! We can now use this tool to get the goods from Firefox 3.5 as well.]

Below is an example of a dictionary attack:

FireMaster.exe -d -f wordlist.txt .

Note that you need to be careful with your wordlist. I used a copy of the all-inclusive free version, in which I had to convert LF to CRLF. I also had to remove words with spaces and non-US character sets. If I didn’t, I got a nasty crash from FireMaster. Can you say potential buffer overflow, anyone?


Below is an example of a hybrid attack:

Firemaster.exe -h -f wordlist.txt -n 3 -g '0123456789' -s -p .

Again, same wordlist issues. With the hybrid attack, it will append (-s) and prepend (-p) the number of characters (-n 3) drawn from the specified character set (-g). The larger your number of characters and character sets, the more time you will need.

Below is an example of a brute force attack:

FireMaster.exe -b -l 10 .

This one will set the max password length to 10 characters (-l), so adjust to your needs. It also uses the default character set of ‘abcdefghijklmnopqrstuvwxyz*@#!$123’, which you may also need to tailor with the -g option. On my machine this would take over 300,000 days to complete at about 120,000 guesses a second. On a high-end, non-virtual system the guessing jumped up to about 250,000 guesses a second, for about 160,000 days to completion.
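A quick back-of-the-envelope check of that estimate: the default character set has 34 symbols, so lengths 1 through 10 give a keyspace of sum(34^k). At roughly 120,000 guesses per second, that works out to the same order of magnitude as the figures quoted above, hundreds of thousands of days:

```python
# Keyspace and time estimate for FireMaster's default -b settings.
charset = "abcdefghijklmnopqrstuvwxyz*@#!$123"  # 34 symbols
keyspace = sum(len(charset) ** k for k in range(1, 11))  # lengths 1..10
days = keyspace / 120_000 / 86_400  # guesses/sec, seconds/day
print(f"{keyspace:.3e} candidates, ~{days:,.0f} days")
```

Either way, exhaustive search is hopeless here, which is why the dictionary attack below is the practical choice.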


My vote is for a good dictionary. We covered scraping websites for making custom wordlists in Episode 129 of the podcast.

I’ve also had some good luck with a commercial Firefox password recovery tool. Granted, it wasn’t free, but the $18 was something I could afford as an expense on an engagement. It won’t crack or bypass the master password, but it may be a little safer than a machine running an old version of Firefox. Just another option. It hasn’t been updated for the signons.sqlite format of Firefox 3.5 or later yet.

So, want a free solution? The author of FireMaster has a command-line tool (FirePassword) and a GUI (FirePasswordViewer) that do the same, with Firefox 3.5 support! Start recovering and use the results responsibly (and with permission)!

– Larry ‘haxorthamtrix’ Pesce

(Via PaulDotCom.)

Sandboxie 3.40 Released

October 5, 2009


Sandboxie runs your programs in an isolated space which prevents them from making permanent changes to other programs and data in your computer.

Benefits of the Isolated Sandbox
Secure Web Browsing: Running your Web browser under the protection of Sandboxie means that all malicious software downloaded by the browser is trapped in the sandbox and can be discarded trivially.
Enhanced Privacy: Browsing history, cookies, and cached temporary files collected while Web browsing stay in the sandbox and don’t leak into Windows.
Secure E-mail: Viruses and other malicious software that might be hiding in your email can’t break out of the sandbox and can’t infect your real system.
Windows Stays Lean: Prevent wear-and-tear in Windows by installing software into an isolated sandbox.


Breach Analysis Portal

September 30, 2009