Monday, December 20, 2010

Automated Vulnerability Research



A paper just hit the Web this month describing a methodology for automated analysis of source/binary code for vulnerabilities, along with automatic generation of a corresponding, working exploit. The paper was written by Thanassis Avgerinos at Carnegie Mellon as part of his graduate research. You can view a video and the paper at the site they set up for this area.

It's an interesting paper that unfortunately received mostly negative comments from the "hacking" community (i.e., DailyDave, Sean/Sotirov/others on Twitter, etc.). Mostly I believe that's because that community has been trained to point out failures in general (applications, operating systems, enterprises, irrelevant academic research, etc.), and the outsider/attacker mindset can make it difficult to accept ideas not from your trusted circle of friends. In fact, the individual most outspoken against the research, Sean Heelan (a researcher from Ireland now with Immunity), wrote his 2009 master's thesis on a similar, but less ambitious, topic: "Automatic Generation of Control Flow Hijacking Exploits for Software Vulnerabilities".

The new paper is an excellent read; it incorporates preconditioned symbolic execution into an end-to-end system to demonstrate that it is possible to take source and binary pairs and automatically develop a working exploit in bounded, fairly simple cases. The authors properly acknowledge the prior work by Brumley (using patch-based approaches) and Heelan, and they point out the remaining work (which is significant!). As the hacking community has shown, it becomes significantly more difficult to scale this and automate it for more complex scenarios.
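To make the idea concrete, here is a toy sketch (entirely my own, not the paper's system) of how a precondition shrinks the search for a crashing input. AEG itself uses symbolic execution with a constraint solver rather than enumeration, and the function names and "vulnerable" model below are invented:

```python
# Toy illustration of preconditioned search for a crashing input.
# A real engine explores symbolic paths with an SMT solver; here we
# brute-force, but the precondition prunes the space the same way.
import itertools

BUF_LEN = 4  # hypothetical stack buffer size in the target

def vulnerable_copy(data: str) -> bool:
    """Model of a strcpy-style bug: 'crashes' when the input overruns
    the buffer and the overflowing byte lands on a checked value."""
    return len(data) > BUF_LEN and data[BUF_LEN] == 'A'

def search(precondition, alphabet, max_len):
    """Enumerate candidate inputs, skipping any that fail the precondition."""
    for n in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=n):
            candidate = ''.join(combo)
            if precondition(candidate) and vulnerable_copy(candidate):
                return candidate
    return None

# Precondition: only inputs of exactly BUF_LEN+1 bytes are considered,
# mirroring how preconditions (known length, known prefix) prune paths.
crash = search(lambda s: len(s) == BUF_LEN + 1, 'AB', BUF_LEN + 1)
print(crash)  # 'AAAAA': five bytes, with the overflowing byte == 'A'
```

With an unconstrained alphabet and length the enumeration blows up combinatorially, which is exactly the scaling problem the paper's preconditions are meant to tame.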

As we move into an era where Artificial Intelligence can automatically translate language, discover laws of physics, and learn to read, it doesn't seem far-fetched to automatically fuzz software and develop exploit code. Oh, wait... we can do that now. The real challenge for this research is not solving the simple case, but showing how it could tackle more complex problems while avoiding attempts to minimize their difficulty. (Which the author did originally, causing many to ignore or overlook his work.)

Humans will presumably stay ahead of AI for the near future, but we've already lost at chess. It seems to me the only reason computers can't find bugs in software better than the best human is that we haven't programmed them to do so yet.

Friday, October 1, 2010

Stuxnet: Military-grade SCADA weapon

Stuxnet was uncovered over the summer, and as details have been forthcoming it has proven a compelling piece of work. Stuxnet is a self-propagating worm designed to target a particular SCADA facility utilizing Siemens WinCC/Step7 software, and it targets the associated PLCs with a particular, as-yet-unknown payload. The majority of the compromises were in Iran, but other countries, such as Germany, Russia and the US, have been infected as well. Quotes from those conducting analysis, such as "hack of the century", "nation-state weapons-grade attack software", and "will be the most analyzed piece of malware ever", are not unusual. From an article titled "Is Stuxnet the 'best' malware ever?":
"It's amazing, really, the resources that went into this worm," said Liam O Murchu, manager of operations with Symantec's security response team.
"I'd call it groundbreaking," said Roel Schouwenberg, a senior antivirus researcher at Kaspersky Lab. By comparison, other notable attacks, like the one dubbed "Aurora" that hacked Google's network and those of dozens of other major companies, were child's play.
The reasons it has engendered such praise are numerous. Any one of its capabilities, with sufficient penetration, would be enough to garner interest, but combining all of them is a generation or two ahead of anything seen in the wild before. The code is of interest as a pure technical achievement, but it also has significant implications for nations and other stakeholders in potential cyberwarfare.

Recently, multiple different parties conducting analysis (or reviewing the public analysis) have concluded that the attack was likely against an Iranian nuclear facility, either the Bushehr nuclear power plant or the uranium enrichment facilities at Natanz, and that it likely originated in Israel. I'll attempt to summarize the arguments below:
  • Almost 60% of the infections are in Iran, according to Symantec (which took over the C2 server)
  • The SCADA/PLC payload doesn't activate unless a particular network fingerprint is found. None of the systems infected with Stuxnet has reported that this fingerprint matched (of course, a targeted victim might hide/cover it up). Given the investment, it appears likely that a particular high-value network was targeted.
  • Guesswork from multiple parties wondering what high value systems might be targeted in Iran quickly jumped to nuclear facilities. Arguments for Bushehr (here, here, and a screenshot of their HMI showing their Siemens WinCC license here) and Natanz are available and have been picked up across technical web sites, the blogosphere and increasingly even the mainstream media. Of course, depending on how unique the target fingerprinting is (and the fact there are confirmed to be at least four variants) it's possible the answer is both of them.
  • Israel was connected due to its obvious interest in delaying/destroying/disrupting the Iranian nuclear program, its cyberwarfare capabilities (also articles here and here) and cyber security expertise, and a clue in the code. Specifically, the word "myrtus" (meaning "myrtle") is the name of the root directory for the exploit code. That was picked up by Kaspersky, but they didn't grasp the meaning. However, the guys at DigitalBond noticed that in Hebrew this was the original name of the Biblical character Esther, who saved the Jewish people from extinction at the hands of a hostile (Persian) nation. NYT picked up on this recently as well. It could always be a false lead, but a rather elaborate one if so. Update: At VB2010, Liam O Murchu presented a more detailed analysis which included the "already infected" registry key that Stuxnet uses to prevent multiple infections. The marker was 19790509. Wikipedia points out that that was the date (May 9, 1979) on which Habib (Habibollah) Elghanian, an Iranian Jewish businessman, was executed by the new Islamic Iranian regime for "corruption", "contacts with Israel and Zionism", "friendship with the enemies of God", "warring with God and his emissaries", and "economic imperialism". He was the first Jew and one of the first civilians executed by the new government.
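As a side note, the single-infection guard works the way you'd expect. Here's a minimal sketch using a plain dictionary as a stand-in for the Windows registry; the key name follows public analyses, everything else is invented:

```python
# Sketch of Stuxnet's "already infected" marker check, per public analyses.
# A plain dict stands in for the Windows registry; the function name and
# flow are illustrative, not taken from the actual binary.
INFECTION_MARKER_KEY = "NTVDM TRACE"   # registry value name reported by analysts
INFECTION_MARKER_VALUE = 19790509      # read by analysts as the date 1979-05-09

def should_infect(registry: dict) -> bool:
    """Skip hosts that already carry the marker; plant it otherwise."""
    if registry.get(INFECTION_MARKER_KEY) == INFECTION_MARKER_VALUE:
        return False                   # marker present: do not reinfect
    registry[INFECTION_MARKER_KEY] = INFECTION_MARKER_VALUE
    return True

host = {}
print(should_infect(host))  # True  -> first visit, marker planted
print(should_infect(host))  # False -> marker found, host skipped
```

The interesting part, of course, is not the mechanism but the choice of magic number, which is what fueled the attribution speculation above.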
It appears that the world is seeing a major salvo in real nation-on-nation cyber warfare activity (as opposed to all of the intelligence/espionage activities that have gone on in the past, which are not acts of war). Numerous, subtle signs point towards an Israeli-originated attack against Iranian nuclear facilities. But it is certainly possible these indications were placed there on purpose, in the hope that people would discover them and point towards Israel. Either way, if anyone does know what the target was AND can validate where the attack originated, it is highly unlikely ever to be published, for various geopolitical reasons. In the meantime, it provides plenty of fodder for armchair analysts and conspiracy theorists to speculate regarding the true intent and origin of Stuxnet.

One final note: Stuxnet is probably NOT the first acknowledged/published nation-on-nation cyber attack. Rumors have circulated for quite a while regarding the US providing a "trojan horse" to the Russians, resulting in a Siberian pipeline explosion, and it sounds like that incident has moved into the acknowledged realm now.

December 7th, 2010 Update: At this point the rest of the post has essentially been validated by public research and acknowledgments. It is no longer speculation that Stuxnet was designed to affect a particular high-frequency drive deployed in Iran for its nuclear program, and that it had at least moderate success. It also appears clear that well organized individuals remain motivated to attack the Iranian nuclear program via more traditional means. I'll probably update this one last time in 3-4 months with any of the more interesting fallout implications. The specific new evidence/events:
  • Symantec, with some help from a Dutch company, completed the analysis of the PLC payload and published the results on November 12th. It found that the payload was targeted at very specific high-frequency drive controllers manufactured in Finland and Iran. These devices have limited applications (centrifuges being one of them), which places them on the list of export-controlled devices. The Finnish company denies exporting them. The payload is, as was assumed, designed to render the targeted devices unreliable and cause them to malfunction in a way that would degrade/destroy the targeted drives and manufacturing process.
  • Iran's leader Mahmoud Ahmadinejad confirmed on November 29th that the country's centrifuges were indeed hit and negatively impacted by Stuxnet. The IAEA confirmed that enrichment activities were shut down (at least temporarily).
  • That same day, November 29th, the top Iranian expert on Stuxnet (and one of their most senior nuclear researchers) was assassinated. A second researcher was targeted the same day, but the attack only injured him and his wife. Iran has stood up a special security service to attempt to mitigate these physical attacks in the future.

Tuesday, June 29, 2010

French cyber activities

Some interesting stuff on the web recently regarding the French. This pretty interesting article (translated from Spanish) describes France's six laboratories developing offensive capabilities. I can't speak to the authenticity/sources of the article, or to what was lost in the translation from Spanish to English, so treat the information with a degree of skepticism. One of the more interesting (and confusing) paragraphs is below. (I believe the "retroconcepción" section refers to reverse engineering.)
Moreover, at least one specialized cell at the Creil air base (BA 110), just north of Paris, works on the development of digital combat weapons, according to industry sources. In another area, the DGSE, France's leading foreign intelligence service, has received a budget to hire 100 computer engineers per year for covert penetration operations against external servers. Especially sought after are specialists in downgrading (able to invisibly force a secure protocol over to one that is slightly less secure), in reverse engineering ("retroconcepción": taking apart the enemy's algorithms, like an engine in a garage, to understand them), in vulnerability discovery, in stealthy system penetration, and in obfuscated code (operating systems whose lines of code are designed to be incomprehensible). Applications from these experts, by the way, are accepted by postal mail only, not by email.
The article makes the point that development of these capabilities would not be allowed under their laws, so they are pursuing it under a penetration testing justification.
The six private laboratories under state control, called ITSECs, have received authorization to develop "digital weapons" thanks to a legal quirk. Since attempting to enter or destroy a computer system is an offense under the French criminal code, they could not be given a general permit. Therefore, the General Secretariat of National Defense (SGDN) granted it under a euphemism: the ITSECs, as part of their work on defending systems against cyber attacks, have the right to develop "penetration tests." And, obviously, to perform these tests they need to develop and maintain "digital weapons" with which to penetrate. Offensive ones, then.
They also include a quote claiming that France is ten years behind China (which would appear unlikely):
On 9 June, Bernard Barbier, technical director of the DGSE (that is, head of the agency's action/intervention systems), was very clear at the annual SSTIC conference in Rennes. "France is 10 years behind China," he said, according to several sources present at the forum. He confirmed that Paris intends to move fast to catch up. However, since most of the planned offensive operations are prohibited, they will be conducted furtively and, where possible, from outside France.
Would be interested in seeing some documentation to support/refute these claims, preferably in English, if anyone has seen any.

Friday, May 28, 2010

Predicting vulnerability

Great paper by Thomas Zimmermann and colleagues at Microsoft on the development of techniques/metrics to predict the number of vulnerabilities in a given binary. There has not been a great deal of productive research to date on predicting the number of remaining vulnerabilities in software, minimizing them, or estimating the time/complexity required to exploit them. This is critically important for software developers attempting to remove/reduce vulnerabilities, for defenders deploying potentially highly vulnerable software (hello, Adobe products!), and for vulnerability researchers trying to optimize where to spend their energy and predict time to discovery.

From their abstract:
Many factors are believed to increase the vulnerability of software system; for example, the more widely deployed or popular is a software system the more likely it is to be attacked. Early identification of defects has been a widely investigated topic in software engineering research. Early identification of software vulnerabilities can help mitigate these attacks to a large degree by focusing better security verification efforts in these components. Predicting vulnerabilities is complicated by the fact that vulnerabilities are, most often, few in number and introduce significant bias by creating a sparse dataset in the population. As a result, vulnerability prediction can be thought of as the proverbial “searching for a needle in a haystack.” In this paper, we present a large-scale empirical study on Windows Vista, where we empirically evaluate the efficacy of classical metrics like complexity, churn, coverage, dependency measures, and organizational structure of the company to predict vulnerabilities and assess how well these software measures correlate with vulnerabilities. We observed in our experiments that classical software measures predict vulnerabilities with a high precision but low recall values. The actual dependencies, however, predict vulnerabilities with a lower precision but substantially higher recall.
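The precision/recall tradeoff in that abstract can be made concrete with a small sketch; all labels and predictions below are invented for illustration:

```python
# Precision vs. recall for vulnerability prediction on a sparse dataset.
# 1 = binary actually had (or was predicted to have) a vulnerability.
def precision_recall(actual, predicted):
    tp = sum(a and p for a, p in zip(actual, predicted))          # true positives
    fp = sum((not a) and p for a, p in zip(actual, predicted))    # false positives
    fn = sum(a and (not p) for a, p in zip(actual, predicted))    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A sparse dataset: 3 vulnerable binaries out of 10 (the "needle in a haystack").
actual = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

# "Classical metrics" behavior: flag few binaries, mostly correctly
# -> high precision, low recall.
classical = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(precision_recall(actual, classical))   # (1.0, 0.333...)

# "Dependency measures" behavior: flag many binaries, catching every
# vulnerable one -> lower precision, higher recall.
deps = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
print(precision_recall(actual, deps))        # (0.5, 1.0)
```

Which tradeoff you want depends on who you are: a developer triaging audit effort can live with low recall, while a defender deciding what to sandbox probably wants the high-recall predictor.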

Wednesday, May 19, 2010

Vulnerability Market Numbers

Great idea out of unsecurityresearch to conduct an anonymous survey of vulnerability researchers to identify their experiences interacting with the groups advertising vulnerability purchasing programs and with direct buyers (anyone who buys vulnerabilities but doesn't maintain an advertised program). I'll summarize some of the interesting results below.

iDefense bought the most bugs in total and was the slowest to pay. ZDI was second in both of those, and the slowest to make an offer. They also ranked the highest in "trustworthiness" and preference to sell to. SecuriTeam was second in preference and ranked the highest in "friendliness" with iDefense finishing last there.

I also did an analysis of the numbers by importing the totals into Excel. I discarded any data with fewer than two quantified samples (numbers over the quantified limit weren't included). Below I've included charts for client-side, server-side, and aggregate prices, as well as the percentage of purchases that were "high value" (i.e., exceeded the survey threshold).

Note that there are lots of opportunities for improvement in the numbers. First, some buyers buy more specialized bugs (i.e., for products with limited market share). These bugs would go for less money due to their limited market impact and would drive the vendor's numbers downwards: the vendor would appear "on average" to pay less, when in fact it might pay average or above-average prices for the same bugs others buy. Ideally one would shop the same bug to multiple vendors, compare offers, and do this for multiple bug classes to get a much better comparison.

Second, since the researchers reported the data anonymously and the survey was advertised to a limited group, opportunities for bias exist there. Respondents could be reporting only their more interesting bugs, or could disproportionately represent a specialty (Oracle products, for example) that would skew the results.

From looking at the data it appears likely that the insufficient number of samples is biasing the "Direct" numbers too low. I took out a single low sample, and that moved the average Direct price up to ~$9,400, putting Direct in first place. Given the percentage of Direct sales that exceed the "high value" threshold, I would argue that Direct sales are probably the highest on average, but we don't have enough data to show exactly how much higher.
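A rough sketch of the kind of analysis described above; the prices are hypothetical, but they reproduce the effect of a single low sample dragging down a small-N average (the "Direct" case):

```python
# Illustrative survey-price analysis. All prices are made up; the point
# is how one outlier dominates a tiny sample.
def mean(xs):
    return sum(xs) / len(xs)

offers = {
    "iDefense": [4000, 5000, 6000, 5500],
    "ZDI":      [6000, 7000, 8000],
    "Direct":   [9000, 9800, 500],   # one low outlier in a tiny sample
}

# Discard categories with fewer than two quantified samples (as in the post).
usable = {k: v for k, v in offers.items() if len(v) >= 2}

for vendor, prices in usable.items():
    print(vendor, round(mean(prices)))

# Dropping the single low Direct sample moves its average sharply upward:
print(round(mean([p for p in offers["Direct"] if p >= 1000])))  # 9400
```

With samples this small, a single report swings the ranking, which is exactly why growing the survey's statistical sampling matters more than any clever averaging.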

For all the (legitimate) complaints out of the NoMoreFreeBugs community and others it's great to see the market reacting and creating both financial incentives and information regarding the market for sellers and buyers. The survey is available here. I would encourage anyone interested in this area to pass the survey on to any researchers you know as increasing the statistical sampling will significantly improve the quality of the data available. Let me know if you want the numbers or more information about the analysis.

Also, for further research on the topic, including vendors, a good briefing by Pedram, and some papers and other material, see my post from September.

Friday, May 14, 2010

Vehicle hacking

Researchers from UC San Diego and the University of Washington have demonstrated the ability to compromise a modern automobile and assert control over critical functions of the vehicle using the government-mandated On-Board Diagnostics (OBD-II) port, which "is under the dash in virtually all modern vehicles and provides direct and standard access to internal automotive networks."

They implemented a CAN bus analyzer called CarShark and then fuzzed the protocols to find other attack vectors with significant results. 
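As a rough illustration of the fuzzing step (not CarShark's actual code; the `send` callback is a stand-in for a real CAN interface), random frames in the standard CAN 2.0A shape can be generated like this:

```python
# Minimal CAN-frame fuzzing sketch. No real hardware is involved: the
# caller supplies a send function, and frames follow the CAN 2.0A layout.
import random

def random_can_frame(rng):
    """11-bit arbitration ID plus 0-8 data bytes, as on a standard CAN bus."""
    can_id = rng.randrange(0x800)            # 11-bit identifier: 0x000-0x7FF
    length = rng.randrange(9)                # DLC: 0..8 data bytes
    data = bytes(rng.randrange(256) for _ in range(length))
    return can_id, data

def fuzz(send, n, seed=0):
    """Generate n random frames and hand each one to the send callback."""
    rng = random.Random(seed)                # seeded for reproducible runs
    for _ in range(n):
        send(*random_can_frame(rng))

sent = []
fuzz(lambda can_id, data: sent.append((can_id, data)), 5)
for can_id, data in sent:
    print(hex(can_id), data.hex())
```

In practice you would record which frames provoke state changes on the bus (the analyzer half of CarShark) and then minimize those frames to isolate the controlling message, rather than fire blindly.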

The paper:

Experimental Security Analysis of a Modern Automobile (Or here)
K. Koscher, A. Czeskis, F. Roesner, S. Patel, T. Kohno, S. Checkoway, D. McCoy, B. Kantor, D. Anderson, H. Shacham, S. Savage. The IEEE Symposium on Security and Privacy, Oakland, CA, May 16-19, 2010.

They posted their Frequently Asked Questions on the paper.

While they used the OBD-II port (implying physical access was required), there are numerous wireless interfaces (such as CANRF) on board for entertainment, remote diagnostics and other features that connect to the on-board network. Wireless interfaces include Bluetooth, WiFi, custom RF interfaces such as those for tire pressure monitoring, GM's OnStar system, and one-way interfaces such as radio (particularly HD Radio) and Sirius, among others.

Some web posts on the topic (from Stephen Northcutt):

Tuesday, February 9, 2010

Training

There are many specialties in computer security. In my experience there are few formal training programs that support the scientific/R&D/non-IA-focused areas such as reverse engineering, intrusion detection, exploit development, algorithm development, etc. The programs that do exist are short, seminar-style classes (most of the conferences, such as Blackhat and Cansecwest, offer these), are buried inside larger, less relevant academic programs, or are IA-focused programs such as those at SANS.

I've been asked not infrequently how to get the proper skills or where to send individuals to gain the requisite skills. While no program can replace intrinsic attributes such as curiosity, critical thinking, motivation, and others indispensable to a successful career, programs can certainly help develop them and provide some domain knowledge. I've created this post to collect some of the more interesting programs/courses/challenges/etc. related to advanced specialties and skills training that I've come across. The list will be US-centric, but not exclusively so. I will update this post as I come across new information, and would appreciate suggestions from any readers out there.

First written February 9th, 2010:

Topics of Interest (not exhaustive):
Host Attack/Defense:
    - Linux/Windows/etc. kernel hacking
    - Rootkit implementation and detection
    - Architecture, containment & resource management
    - Forensics and Assessment of damage

Network systems
    - Network Tracing for attribution
    - Attack detection
   
Code analysis:
    - 0-day Vulnerability Discovery
    - Reverse Engineering of Binaries
    - Vulnerabilities and Exploits

Programs:
  • Cyber Security Awareness Week CTF challenge at NYU-Poly (Defcon-like Capture the Flag). Focused this year on Web Application security, Reversing and Exploitation
  • Penetration Testing and Vulnerability Analysis CS6573 course currently taught at the Polytechnic Institute of New York University
  • Blackhat Conference training and briefings.
  • REcon Reverse Engineering Conference. Very technical conference focused on advanced RE techniques. 
  • Other technical/hacker conferences: (Cansecwest, Shmoocon, Toorcon, etc.) Quality varies by individual conference but a lot of similarities
  • Big IT security-focused Training companies like SANS and INFOSEC Institute. Much of the material is not of interest (to me or other similar types) but there are some smart people teaching good classes, you just have to know where to look.
  • Consultant-led training from places like Immunity, Zynamics, Recurity, etc. Excellent courses from experts, but pretty expensive. Deep dive into a niche (Cisco RE, heap overflow exploitation) similar to the conference training but longer and more expensive. Can also be tailored to their audience or provided at a remote site.
  • Academic Centers across the country usually have courses (or even programs) that are pretty solid. A quick list to start with would be the NSA "Centers of Excellence" in IA program.  Focus on the ones with a CAE-R next to them. At 40 sites there is still a TON of chaff on there, but there are some good programs/people out there. CMU, FIT, and Purdue are some of the stronger programs out there, but honestly any rigorous program that emphasizes assembly, algorithms, advanced architectures, etc. would help providing fundamental skills. I tried going through experts I know to see if there were any schools that were represented with increased frequency, the only thread seemed to be technical programs (California/Indiana/Massachusetts/Worcester/etc. Institute of Technology) mixed in disproportionately among the other schools.
On-the-job training is always the best place to learn, and in this arena that is particularly true. Reading about exploits doesn't teach you a lot until you've written C/assembly code and messed around with registers. I'm confident that there are numerous other programs out there I haven't listed; I would love feedback below, or offline via email/twitter/etc., regarding excellent programs that other people have discovered. The next step would be to create a table breaking out and grading the content from each program... I don't have that much time, however.

Friday, January 29, 2010

Information Markets

Vulnerability data is a subset of a broader market in information. There's a great company called Intrade running a full market exchange based on boolean future facts (such and such will or will not happen). They were accurately predicting the Scott Brown victory well before it happened. Not a perfect system, but an interesting way to quantify the estimates being made and associated confidence metrics. You can email them to suggest new markets. Would be interesting to see the community suggest some 0-day related topics to price. The graphic shown lists the market price that the Higgs Boson Particle will be observed on/before 31 Dec 2010.

Also read an interesting article from ABC News using the recent Google compromise as an excuse to discuss the vulnerability market. Some of the more memorable quotes:
Likely, they merely had to tap a thriving underground market, where a hole "wide enough to drive a truck through" can command hundreds of thousands of dollars, said Ken Silva, chief technology officer of VeriSign Inc. Such flaws can take months of full-time hacking to find. "Zero days are the safest for attackers to use, but they're also the hardest to find," Silva said. "If it's not a zero day, it's not valuable at all."
"Pedram Amini, manager of the Zero Day Initiative at the security firm TippingPoint, estimated that the IE flaw could have fetched as much as $40,000. He said even more valuable zero-day flaws are ones that can infect computers without any action on the users' part."
In this case, Microsoft had actually known about the flaw since September but hadn't planned to fix it until February, as companies sometimes prioritize fixing other problems and wait on the ones they haven't seen used in attacks.
There's also another, highly secretive market for zero days: U.S. and other government agencies, which vie with criminals to offer the most money for the best vulnerabilities to improve their military and intelligence capabilities and shore up their defenses.
TippingPoint's Amini said he has heard of governments offering as high as $1 million for a single vulnerability — a price tag that private industry currently doesn't match.
Little is publicly known about such efforts, and the U.S. government typically makes deals through contractors, Amini said. Several U.S. government agencies contacted by The Associated Press did not respond to requests for comment.
One researcher who has been open about his experience is Charlie Miller, a former National Security Agency analyst who now works in the private sector with Independent Security Evaluators. Miller netted $50,000 from an unspecified U.S. government contractor for a bug he found in a version of the Linux operating system.
I had to chuckle at the line "Several U.S. government agencies contacted by The Associated Press did not respond to requests for comment." Go figure. My blog post on this topic, with links to Pedram and Charlie's papers as well as some companies that advertise their work in the domain, is here.

Also of interest is Google's announcement that they will be copying Mozilla in paying for vulnerabilities reported to them privately. With Chrome and Firefox both monetizing this information (at arbitrary, as opposed to market, prices), it remains to be seen how long Microsoft will hold out in refusing to pay for third-party research.

Thursday, January 28, 2010

Hacking embedded systems - March update

Big news recently was the exploit against the PS3 hypervisor developed by George Hotz. Nate Lawson has a good writeup explaining the attack on his blog. Hotz fills a section of memory with duplicate pointers to a buffer of memory that he controls. He then deallocates the section of memory containing the duplicate pointers, but uses hardware to interrupt the system before it completes the deallocation. The hypervisor is thus left with memory pointing to a buffer controlled by the Linux kernel, which is under the attacker's control. The attacker then creates virtual memory buffers until the hypervisor creates one that overlaps the section the attacker controls. Once this is complete, the magic happens when the exploit creates:

HTAB entries that will give it full access to the main segment, which maps all of memory. Once the hypervisor switches to this virtual segment, the attacker now controls all of memory and thus the hypervisor itself. The exploit installs two syscalls that give direct read/write access to any memory address, then returns back to the kernel.

The attack requires the attacker to apply a precisely timed (nanosecond-scale) voltage glitch to a particular line (shown by the red circle on the graphic above) on the PS3 memory bus to confuse the system and interrupt the memory deallocation. George has not compromised the secret keys, and much work remains. But attackers can now access all of the hypervisor code and should be able to operate in memory outside the hypervisor on the main Cell processor (PPE). There are seven other Cell coprocessors (SPEs), including one dedicated to security functions.
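Here is a toy model of the race described above (all structures invented; the real attack glitches the memory bus while the hypervisor walks its HTAB): a "free" loop interrupted partway leaves live entries still pointing at attacker-controlled memory:

```python
# Toy model of the glitched deallocation. An exception stands in for the
# hardware glitch; nothing here resembles the actual hypervisor code.
class GlitchAbort(Exception):
    pass

def deallocate(table, glitch_at=None):
    """Clear each entry in order; a glitch partway through the loop
    leaves the remaining entries as stale (dangling) pointers."""
    for i in range(len(table)):
        if i == glitch_at:
            raise GlitchAbort  # memory-bus glitch interrupts the free
        table[i] = None

attacker_buffer = bytearray(16)       # memory the "guest OS" controls
table = [attacker_buffer] * 8         # duplicate pointers to that buffer

try:
    deallocate(table, glitch_at=3)    # glitch fires after 3 entries cleared
except GlitchAbort:
    pass

dangling = [e for e in table if e is attacker_buffer]
print(len(dangling))  # 5 entries still reference attacker-controlled memory
```

The real exploit then races the hypervisor into reusing one of those stale mappings, at which point writes through the attacker's buffer become writes into hypervisor-owned structures.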

It's a testament to the level of security engineering put in by Sony that the PS3 lasted this long. Their willingness to allow dual-booting Linux likely deflected intense analysis, since it removed some of the usual motivations for hacking the system.

On a personal note, George Hotz is developing an impressive track record. He was one of the key developers behind the iPhone/iPod Touch hacks and released the primary tool for "jailbreaking" those systems. For a 21-year-old he has a bright future in the field...

UPDATE (March 29th, 2010):
Sony has responded to George's research by announcing that on April 1st they will disable the "Other OS" feature on all deployed PS3s. Since this was an advertised feature when they sold the devices, some users are speculating that Sony will be sued for retroactively removing a feature many people paid for. George is being blamed by some users and is planning to create a workaround. An interesting unintended consequence, and a heavy-handed response by Sony.