Thursday, May 26, 2016

From public sector to private sector: A view from the trenches.

(An abridged version of this post appeared in the CipherBrief on May 15th, 2016) 

In 2009 I left a job at the Defense Advanced Research Projects Agency (DARPA) and started Siege Technologies. My goal was to fill the vacuum left by recent large corporate acquisitions: small, innovative companies building advanced, disruptive technical solutions for offensive and defensive cyber warfare. On my last day at DARPA I signed paperwork relinquishing all the accesses I had received during my time with DARPA and our numerous partners. They took my green badge, CAC card, DARPA badge, and computer. I felt a little like George Banks in Mary Poppins when the bank fires him and proceeds to destroy his umbrella and poke a hole in his hat as part of the discharge process. I founded Siege Technologies two weeks later and slowly collected most of those resources again over time. The experience was extremely informative and provided some important lessons for anyone contemplating a move from government into private industry, or from a large company into a startup.

Advantages of government experience

There are some powerful advantages that time in government provides someone making the plunge into entrepreneurship. The biggest is perspective on what's going on at a national or even global level. Insight into the hard problems, operational challenges, and thought leaders is an invaluable takeaway from government service. Additionally, the friends and contacts made throughout government, industry, and academia can provide valuable assistance down the road. Having worked as a contractor, a government employee, and a corporate employee again, I can say there is a big difference between walking into your favorite government agency with a "blue badge" versus a "green badge". A government badge causes government people to assign you moral characteristics significantly more flattering than the negative assumptions sadly pinned on contractors. And strangely, these positive views follow you out into corporate America. Even though I was the same person throughout, the people you meet while wearing the government badge perceive you differently, both during and after government service.

Starting from scratch is hard

It is not easy to take a blank piece of paper and write a novel. Starting a company is similar: building something from nothing requires the ability to see a future that does not yet exist, and to execute to make that vision a reality. Taking a small firm and helping it break out of a small-business mindset to reach its potential is equally hard (and maybe harder in some ways), because you need to reshape structures that may have hardened and take on risk that may previously have been discarded or avoided. The technical team, technology, access to customers and partners, cash, and information are never as robust as you would like and are often in a state of flux. A challenge unique to moving from government to a startup is the gossip mill of disgruntled government and commercial individuals who allege stolen ideas, inside access, or other improprieties as the real drivers of success. Changing the mindset of the brave souls who move from the comfort of government to the excitement of a startup is imperative, as there is no checklist of procedures or higher authority to consult before getting things done. Sitting at your desk or attending meetings is not going to get a product built or customers signed up; startups are an exercise in energy exertion. I vividly remember talking to my wife in December of 2009 about whether we would have a paycheck before Christmas and estimating how many days remained until our final credit line was maxed out. Getting my first Siege paycheck on Christmas Eve was the best Christmas Eve gift I've received! As Benjamin Franklin said, "Nothing ventured, nothing gained".

Smaller is riskier

There is a big difference between a job in government, a job at a big business, and a leadership position in a startup. The government has a difficult time ever firing anyone or laying people off, although it does happen on rare occasions. Big business doesn't usually fire people either, and layoffs are usually focused on culling the weaker-ranked employees (although entire segments of a business can be felled in a single swipe!). And while small companies engage in layoffs and firing too, they introduce a new variable into the equation: cash. In business they say "cash is king" because without it, a business cannot conduct operations. Starting a company involves working for free, taking reduced pay, surviving gaps in funding, contributing your own money, and wondering how to make payroll. It means borrowing money from friends and banks, and signing numerous contracts as the guarantor. Even well-funded, VC-backed firms have to worry about cash throughout the process, and keep track of the "going out of business" point when the burn rate chews up the cash in the bank.
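The "going out of business" point is simple arithmetic: cash on hand divided by net monthly burn. A minimal sketch, with hypothetical numbers rather than any company's actuals:

```python
# Illustrative runway math; the figures below are made up for the example.
def runway_months(cash_on_hand, monthly_revenue, monthly_expenses):
    """Months until the 'going out of business' point at the current burn rate."""
    burn = monthly_expenses - monthly_revenue  # net cash consumed per month
    if burn <= 0:
        return float("inf")  # cash-flow positive: no runway limit
    return cash_on_hand / burn

# A startup with $300k in the bank, $20k/mo revenue, and $70k/mo expenses:
print(runway_months(300_000, 20_000, 70_000))  # 6.0 months of runway
```

The exercise is less about the division and more about watching how the answer moves as revenue slips or expenses grow.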

Smaller is faster

Making decisions in a small company is easy. The individual makes a decision and moves out. Sometimes there are managers or stakeholders to consult, but the reporting chain is much shorter and the stakeholders much fewer. The ability to make decisions quickly allows companies to react to changing market dynamics and technology much faster than larger firms competing in the same space. A great example of this is purchasing. When I worked at a large defense contractor in the 1990s, I needed to get a copy of "PC Anywhere". Weeks went by until I heard it was authorized. Weeks turned into months, and when I reached out to find where it was, I discovered the acquisition system had lost my order. When I explained what I needed I was assured it would be coming soon. A week or two later a different product (PC-Xware) arrived! Contrast that with a small firm with a flat management chain: if someone needs something at a small firm, they ask their manager and it gets ordered on a corporate card within a day or two.

Smaller is more innovative

It’s easy to understand why small companies move faster, but where does the phrase “small companies innovate, big companies integrate” come from? Innovation is a complex topic about which numerous books have been written. I believe there are a number of factors behind the wave of innovation coming from small firms:

  • Ability to attract and retain top talent. Employees like to work in nimble, more fun, better paying environments!
  • Emphasis placed on innovation. Small companies are taking on larger, often entrenched competitors and creating something new is often imperative to survival.
  • A culture that values disruption over the status quo. Big companies don’t change quickly while growth-oriented small companies are focused on how to change the game and become a big company!
  • Quicker access to resources and decision making. The lack of process and long management chains enables individuals to quickly buy/hire/talk/build whatever they need to get the job done, while larger organizations use processes to limit risk.

Building a company is rewarding

Taking a company from nothing, or from small to something large enough to have some “punching power”, is extremely satisfying. It means the market recognizes that you are offering something of value, and that people are joining your endeavor to make a difference. The resources you accumulate as you grow mean some of the concerns of the earlier days are mitigated and new opportunities begin to present themselves. A new era of entrepreneurs is rising, increasingly availing themselves of the opportunity to inject a conscience into their work and engage in social causes through their corporate position, their products, and the resources created by the firm. My wife and I have committed to giving the bulk of our gains from Siege some day to charitable causes, and view the firm as an opportunity to have a positive impact at a scale unachievable as individual contributors to those causes. Firms like Newman’s Own give away their profits to philanthropic causes, and numerous clothing/jewelry/coffee businesses integrate a social cause into their corporate mission and value statements. In fact, the percentage of corporate giving is inversely correlated with size, with the smallest firms giving the most generously[1],[2].

Perspectives on the cyber security startup market

The cyber security startup market has been hot. On fire is probably more accurate. The graph below shows how investment has been ramping up over the last seven years (I started Siege at the relative low point of 2009, apparently not a good year from an investor's perspective!).

Figure 1: Millions of dollars invested in cybersecurity companies.

Spending on cybersecurity in 2015 exceeded $75 billion according to Gartner[3]. The market is over $100 billion according to MarketsandMarkets and will grow to $170 billion (USD) by 2020, at a compound annual growth rate (CAGR) of 9.8 percent from 2015 to 2020[4]. The cyber security insurance market is also expecting significant growth and should reach $7.5 billion in annual sales by 2020, up from $2.5 billion this year[5].
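A quick check of the compounding arithmetic behind that projection (the 2015 base of roughly $106.5B is implied by the numbers, not stated in the report):

```python
# Five years of compounding at a 9.8% CAGR from 2015 to 2020.
base_2015 = 106.5                             # $B, implied starting market size
projected_2020 = base_2015 * (1 + 0.098) ** 5  # compound growth over 5 years
print(round(projected_2020, 1))  # 170.0 -- matches the projected $170B
```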

But in 2015, signs were showing that the valuations and dollars heading to cybersecurity companies had begun to cool. Specifically, “some are predicting a measured slow-down leaving a slew of Seed/Series A funded companies without a Series B sponsor”[6]. Median security EV/revenue multiples have declined from 5.5x in 2013, to 5x in 2014, and 4.5x in 2015[4].

That said, the problems remain. Enterprises large and small, government agencies, and individuals are still being targeted and compromised with increasing frequency. 2015 alone saw a 48% jump in reported compromises, and successfully detected attacks have been rising at a compound annual growth rate of 66% since 2009[7]. The annual cost of these attacks ranges from hundreds of billions to trillions of dollars depending on your estimation methodology and sources (considering theft of IP versus just cleanup, for example). Nobody has built the silver-bullet solution, and significant opportunities exist for entrepreneurs who provide genuinely new solutions, in the form of technologies or services, to the problems that exist and loom over the horizon.

Perspectives on transitioning government-funded technology

At Siege we have a number of technologies that we have developed with external funds, spanning areas as diverse as cyber quantification, custom hypervisors, software protection, and software vulnerability remediation. Some were developed entirely with government funds, some almost exclusively with internal or commercial funds, and most with a hybrid. Taking these capabilities from the lab to product is not easy. Numerous hurdles must be addressed, from classification to export control to publication restrictions to the myriad intellectual property rights issues. And that’s before you address the “valley of death” that exists between research and products. An article in IEEE Security & Privacy captures this challenge well: “New and innovative technologies will only make a difference if they're deployed and used. It doesn't matter how visionary a technology is unless it meets user needs and requirements and is available as a product via user-acceptable channels. One of the cybersecurity research community's biggest ongoing challenges is transitioning technology into commercial or open source products available in the marketplace.”[8] That reflects my personal experience working in research and innovation at big companies, at DARPA, and now at a smaller firm.

Inventors are often beholden to their creations and believe they possess more value than they do. There is usually a gap between the requirements targeted during development and what the market needs. And there is funding required to get the product from where it is to where it needs to be. Inertia fights against changing anything and turning the technology into a product, but the fight can be well worth it if the numerous obstacles are addressed head on with vigor. It is a fight that must be won in order to “change the game” and make a difference, instead of allowing the solutions to important national and global problems to die an inglorious death in the lab.


It is impossible to effect change without taking risk. Change necessitates overcoming resistance and various obstacles to achieve a necessary goal. Starting or joining a new venture provides the opportunity to effect significant change at the personal, technological, national, and societal levels if success is achieved. But even if failure is the outcome, lessons are learned and character is formed through the process. The average successful entrepreneur has several failures under his or her belt (I had two false starts) and is middle-aged, the median age at which entrepreneurs start their companies being 40[9]. Teddy Roosevelt captures the opportunity well with his famous quote: “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.”[10]

[1] CEO Force For Good, “Giving in Numbers 10TH ANNIVERSARY 2015 EDITION”, September 2015.
[5] PwC, “Insurance 2020 & beyond: Reaping the dividends of cyber resilience”, September 2015
[6] Momentum Partners, “Cybersecurity Market Review 4Q 2015 Year End”, January 2016
[8] Maughan, D., Balenson, D., Lindqvist, U., & Tudor, Z. (2013). Crossing the Valley of Death: Transitioning Cybersecurity Research into Practice. IEEE Security & Privacy, 11(2), 14-23.
[9] Ewing Marion Kauffman Foundation, “The Anatomy of an Entrepreneur”, August 2009.
[10] Theodore Roosevelt, Excerpt from the speech "Citizenship In A Republic" delivered at the Sorbonne, in Paris, France on 23 April, 1910.

Wednesday, November 12, 2014

Side channel attacks

An interesting paper came out in late 2013 describing a method to use audio emanations from a CPU to determine a private key.

Since the 1990s, work has gone on using timing or power analysis to accomplish the same thing (deducing secret keys). Paul Kocher pioneered much of this work, including timing attacks against RSA (paper here). Multiple attacks against RSA have used power analysis with success. There are multiple defenses against timing and power attacks, including filtering emanations, smoothing activity (adding noise), and blocking an adversary's ability to sense the data, with varying degrees of success.
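To see why timing leaks keys at all, consider naive square-and-multiply exponentiation: each 1-bit in the secret exponent costs an extra multiply, so runtime correlates with the key's bits. This toy sketch (not Kocher's actual attack, which statistically recovers bits one at a time) just counts the operations:

```python
# Toy illustration of the timing leak in naive square-and-multiply modexp:
# the extra multiply on each 1-bit makes runtime depend on the secret exponent.
def modexp_leaky(base, exponent, modulus):
    result, ops = 1, 0
    for bit in bin(exponent)[2:]:              # scan exponent bits MSB-first
        result = (result * result) % modulus   # always square
        ops += 1
        if bit == "1":
            result = (result * base) % modulus # extra multiply on 1-bits only
            ops += 1
    return result, ops

# Two 8-bit exponents with different Hamming weights take measurably
# different amounts of work -- i.e., different time for an attacker.
_, ops_light = modexp_leaky(7, 0b10000001, 1009)  # two 1-bits
_, ops_heavy = modexp_leaky(7, 0b11111111, 1009)  # eight 1-bits
print(ops_light, ops_heavy)  # 10 16
```

Constant-time implementations (e.g., Montgomery ladders, blinding) exist precisely to flatten this operation-count difference.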

The recent work can be viewed as a derivative of that prior work. But instead of measuring the time between actions or power surges directly, it uses acoustic emanations to derive the same information.

Of course, the field of side channel attacks on systems is old and interesting. Some classics:
  • Tempest-style attacks intercepting video broadcasts from outside the building since the 1980's.
  • Optical tempest, where the authors analyzed the activity light on various systems and constructed a system to intercept the light from across the street of an office building and recreate a serial data stream (Pre-published version here, ACM version here.)
  • A creative attack described in 2007 used the microphone on your system to drive input to a speech-parsing engine (such as Windows Speech Recognition in Vista). MS downplayed it, of course, but it highlights an interesting attack vector.
  • George Hotz's PS3 hack, where he used an FPGA board to disrupt the memory bus on the PS3 and cause instruction flow to jump into regions of memory that he controlled.
  • I discussed using speakers for covert channels in an earlier post.
Another interesting side channel technique came out in 2014 from researchers at Ben-Gurion University. They showed that you can use the FM receivers in mobile phones to collect specially encoded data from nearby video displays, creating a cooperative TEMPEST exfiltration channel. Not really an attack per se, as it involves cooperating systems, but it's certainly useful for enabling broader attacks. (Just as ASLR bypasses aren't attacks per se; they are information leaks that can be utilized to enable complex attacks/exploits.) Also not new, as it builds on the earlier TEMPEST work, but doing it from a cell phone is novel.

Using RFID to access systems or propagate code has been discussed since at least 2006. Vulnerabilities in optical character recognition systems (which take pictures, and analyze them in an attempt to convert into digitally represented text) were published in 2007.  Attacks using QR codes were deployed in the wild in 2012.

Those attacks rely on analog systems that are looking for digital input in the analog medium provided by an adversary. Denial of service attacks that are purely analog (such as pointing a light at a camera, or an EMP, which disables the function of systems quite nicely) have been well documented. But what about hacking a passive sensor such as a wireless IDS? (There are hundreds of vulnerabilities in just two popular passive, inline sensors: Wireshark (285, of which 22 enable RCE) and Snort (10, of which 2 enable RCE).) And what would you call it if you took advantage of a feature extractor (such as a facial or gait recognition engine in a camera) to crash or even exploit a device?

It's my opinion that as computing devices become more ubiquitous and embedded in everything you'll see these types of attacks in more and more interesting locations (Police car license plate scanners anyone? Border security systems. NFC is getting owned all over the place lately. The list goes on). Attacks will move beyond information leaks and disruption to include remote access via non-anticipated "side channels" or subsystems that people don't realize create risk. (Your Antivirus software, your networked coffee pot, your tire pressure monitors!)

Tuesday, November 12, 2013

#badBIOS and Nefarious / Advanced Malware

Screen shot of possible high frequency audio channel in badBIOS
"badBIOS" is the name given to a suspected attack that had been going on for several years against systems owned by Dragos Ruiu. He posted about it on Twitter (@dragosr) using the hashtag #badBIOS and on Google+. The story gained momentum when Ars Technica did an excited writeup about it. I'm going to try to summarize the nearly magical properties it is believed/suspected to possess, with references (here, here, here), but I apologize if I confuse the claims/rumors/possibilities:
  • It infects OpenBSD, Linux, Mac and Windows systems.
  • It infects the BIOS (UEFI and others).
  • Even if the BIOS has been reflashed, it persists through reboots.
    • Dragos posited it is due to video or network card firmware modifications
  • It utilizes IPv6 even if that's disabled in the network stack.
  • It loads a hypervisor
  • It transfers via USB and other mechanisms.
  • It "reacts and attacks the software that we're using to attack it". For example, the registry editor stopped functioning to prevent them from performing forensics analysis.
  • It communicates via high frequency audio sent through the computer microphones and speakers.
  • It can hide itself in Windows font files and deletes them if inspected. 
From the Ars interview:
"We had an air-gapped computer that just had its [firmware] BIOS reflashed, a fresh disk drive installed, and zero data on it, installed from a Windows system CD," Ruiu said. "At one point, we were editing some of the components and our registry editor got disabled. It was like: wait a minute, how can that happen? How can the machine react and attack the software that we're using to attack it? This is an air-gapped machine and all of a sudden the search function in the registry editor stopped working when we were using it to search for their keys."
The argument being that if it is not connected via the network (Bluetooth, Wifi and Ethernet were all removed/unplugged) and a USB drive wasn't used to reinfect the system, how could it have been infected despite a reflashed BIOS and new hard drive? 
But the story gets stranger still. In posts here, here, and here, Ruiu posited another theory that sounds like something from the screenplay of a post-apocalyptic movie: "badBIOS," as Ruiu dubbed the malware, has the ability to use high-frequency transmissions passed between computer speakers and microphones to bridge airgaps.
That summarizes the major posited properties of the malware. With such powerful, never-before-seen, complex properties posited, Dragos has encountered some skepticism from the (normally skeptical) security/IT community. I won't highlight them all, but there are plenty on Twitter, in the blogosphere (here, here, etc.), and elsewhere. Even Ars posted a follow-up article to give attention to the amount of skepticism. badBIOS already has its own satirical Twitter account. Renowned researcher Tavis Ormandy went through the font files and disk images and concluded that there was nothing suspicious there and that Dragos should just ignore it and relax. [Turned out to be good advice.]

The major concerns seem to revolve around the following points:
  1. Where is the evidence? (Both the lack of available data, and nothing in the data provided)
  2. Why has this been going on for three years and just now being exposed?
  3. Why would someone combine so many novel attacks into one network/attack against Dragos?
  4. Can you even build a set of code that is portable against so many firmware/hardware/OS configurations? In a bandwidth constrained environment? 
There have been multiple people supporting Dragos, with tweets from known members of the community (like Alex Stamos or Jeff Moss), blogs (here and here), and even news pieces.
There are viable counter arguments for the doubters:
  1. Dragos has been providing some disk images, spectral analysis of the audio and other forensics data sources for analysis (although mostly to private, often unnamed sources). 
  2. It is possible that the code has been growing in complexity over time. And Dragos wasn't aware of the issue until later on.
  3. Dragos runs the Pwn2Own competition at CansecWest. Between that and his normal work (which presumably involved enough 0-day research to qualify him to start such a contest in the first place) might make him an interesting target for someone trying to acquire 0-days. 
Interestingly, almost nobody seems to doubt that the individual components are possible. Now that I've summarized how we got here and what's been seen to date, I'm going to add some thoughts.

First, there were a LOT of skeptics when Stuxnet came out. I was one of the early people (late September 2010) who embraced Ralph Langner's hypothesis (it seemed like the most obvious solution given all the evidence). There were people speculating that the analysis was flawed, that it was really a ruse by the Russians or Chinese, etc. It turned out that the analysis was fine and the nefarious/advanced malware option was, in fact, the correct conclusion. There are lots of compelling research demonstrations in each of the areas postulated to date; the only really novel thing here (so far) would be the fact that they are all combined into one VERY complex piece (or pieces) of code:
  • Researchers at Siege Technologies and academics in Germany have demonstrated covert channels are possible over audio channels. 
  • BIOS infections can provide persistence and are definitely not new. They just keep getting better over time.
  • Proof of concept infections/reprogramming of Network cards (here and here) and video cards have already been developed (and now people are publishing papers on how to catch them). One aspect of such low level attacks is they are impervious to disk replacement or BIOS reflashing and don't care about the version of the operating system. 
  • Hypervisor attacks have been around for years.
  • IPv6 is just a standard network protocol. Even if it is "disabled" in the OS, malware could still carry and use its own IPv6 code on the host system.
  • USB sticks have been a well-known attack vector for years. In 2005, researchers at Black Hat showed that you could exploit the operating system's USB drivers when plugging in a device. This was shown again more recently in 2014, improved to hide in USB firmware.
  • Malware has been sensing/reacting to evade detection for years. 
  • Multiple platforms can be handled many ways. One would be code residing in the BIOS or on peripheral devices (NIC/graphics/etc.) as discussed in the bullets above. Another would be motherboard/processor components such as the AMT manageability engine.
  • For storage, people have used hard drives for ages. Given they were removed here, other approaches must be considered. Obviously, if components in hardware were reflashed (as described in the research papers above), that would provide persistence. Other research has shown that NAND regions marked as unusable on disks can be utilized (of course, that would most likely be on the hard drive that was removed, but it could theoretically extend to the boot flash of other components). Others have discussed, since 2006, modification of the processor itself by exploiting its ability to upgrade the microcode. (Of course, that's difficult given the cryptographic signature constraints.) McAfee filed for a patent in 2011 to put security in at the microcode level.
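On the audio point above: an acoustic covert channel needs nothing exotic, just frequency-shift keying through the speaker and correlation at the microphone. A minimal sketch, with illustrative (not real-world) frequencies, sample rate, and framing, assuming a clean simulated channel rather than actual hardware:

```python
# Minimal FSK audio covert channel sketch: each bit becomes a 10 ms burst of
# one of two near-ultrasonic tones; the receiver correlates each burst
# against both candidate tones and picks the stronger one.
import math

RATE, F0, F1, N = 48_000, 18_000, 19_500, 480  # 480 samples = 10 ms per bit

def modulate(bits):
    samples = []
    for b in bits:
        f = F1 if b else F0
        samples += [math.sin(2 * math.pi * f * i / RATE) for i in range(N)]
    return samples

def tone_energy(chunk, f):
    # Correlate against sine and cosine so the result is phase-independent.
    s = sum(x * math.sin(2 * math.pi * f * i / RATE) for i, x in enumerate(chunk))
    c = sum(x * math.cos(2 * math.pi * f * i / RATE) for i, x in enumerate(chunk))
    return s * s + c * c

def demodulate(samples):
    bits = []
    for k in range(0, len(samples), N):
        chunk = samples[k:k + N]
        bits.append(1 if tone_energy(chunk, F1) > tone_energy(chunk, F0) else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(demodulate(modulate(message)) == message)  # True
```

A real channel would add synchronization, error correction, and robustness to room acoustics; the bandwidth is tiny, which is exactly why skeptics focused on whether such a constrained link could do everything claimed.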
So to summarize:
  • The lack of third party confirmation means that probably everything that is "suspected" isn't "actual". The very definition of suspected means that confirmation is missing. 
  • Nothing that has been suggested as a possibility is theoretically new, although the practical deployment of a robust tool might be novel. 
  • Certainly the integration of all of those capabilities would be very novel. The combination of even three-quarters (or maybe even half!) of the alleged capabilities would put it on par with, or ahead of, Stuxnet.
  • Knowledge of capabilities and threats can certainly induce paranoia, especially in a field that advocates it as a desirable property.
Personally, I think it's likely that there have been a few nefarious things on the network, some of which are gone. As a result of that absence, significantly advanced properties are suspected instead of assuming that the attack was transient. I remember the significant challenges I had troubleshooting a random hard crash my system was experiencing. A mistake in malware that was exploiting hardware was definitely one of my concerns... but nothing I did could identify a problem. It turned out, after I turned to outside help, that it was temperature: the fans were going and the system was simply overheating.

Seems obvious now, but the complete absence from logs, the random behavior, and the persistence despite testing and replacement of hardware had led me to some interesting possibilities that were theoretically possible but unlikely. Might be the same thing going on here. [Update: Turns out that's exactly what it was... Dragos came out and said he was incorrect. Looks like he was just overly paranoid and hadn't spent enough time looking at all the weird OS things that happen under the hood. His knowledge led him to unlikely-but-possible nefarious causes instead of a simpler answer.]

It's really hard to do forensics when you don't have a position of trust; when you don't know what's good or bad; and when those beliefs keep getting disrupted because you don't have consistent data/records. And doing complex analysis in isolation is a bad idea. Crowdsourcing is a great approach to this sort of problem (with data provided, of course; everyone was crowdsourcing opinions!)

It's also been interesting to see the community awaken to the possibility of these academic, proof-of-concept types of attacks existing in the wild. Much like the snarky reactions to Stuxnet, most don't believe these would ever occur in the "real world". But most of the techniques discussed in this post and around badBIOS date back to the mid-2000s, and probably even earlier in less obvious forums (obscure blogs, email lists, IRC, etc.). There's nothing new under the sun, and yesterday's research will be today's proof of concept... and tomorrow's operational code.

[November 2014: Updated to include Dragos saying he was wrong, just overly paranoid, #badBIOS USB firmware publications, and MITRE's BIOS implant work]

Monday, February 11, 2013

Cyber-espionage tool - Gauss

[Note: This post was written in August 2012 but I never finished it to my satisfaction so it didn't get posted. I'm posting it now because I loved this font-signaling trick and wanted to write about it. One advantage of posting 6 months later is that I can report on what they found after 6 months of analysis; see below.]

Kaspersky has been spearheading a rash of discovery and analysis of advanced cyber-espionage tools that they (and others) attribute to "nation-states". Stuxnet broke ground in 2010, and eventually even the hardened skeptics admitted it was state-sponsored... then came Flame, Duqu, and now Gauss this summer.

I didn't write about them, as multiple people have covered these topics at length; I'm pretty confident things are nearing saturation when my wife mentions them to me. But a couple of things are interesting. First, it seems that either nation-states are getting more active in this space, or AV companies are getting better at detecting them. I'm curious which it is. Second, Gauss demonstrated that the authors learned from at least some of Stuxnet's mistakes. Of particular interest to me was their use of an encrypted payload that was keyed to the system configuration and not reversible. (Unlike Stuxnet, which had a child-like "if PCI device address = xyz, then decrypt" approach.) I'd considered this possibility 6 years ago when learning about ABE (Attribute-Based Encryption), which enables the implementor to use attributes as part of the key in a one-way function. In the case of Gauss, they simply hashed the %PATH% environment string and the name of a directory in %PROGRAMFILES%, so analysts don't know what values are necessary to unlock the encrypted payload.
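The idea is easy to sketch: derive the decryption key from attributes of the victim's environment, so the payload opens only on the intended configuration and analysts without the right %PATH%/%PROGRAMFILES% values cannot decrypt it. This is a deliberately simplified analogue (Gauss iterated MD5 with a salt and used RC4; here a single SHA-256 pass and XOR stand in, and all the strings are invented):

```python
# Simplified sketch of target-keyed payload encryption, Gauss-style.
import hashlib

def derive_key(path_env, programfiles_dir):
    """Key is a hash of environment attributes, not stored in the binary."""
    return hashlib.sha256((path_env + "|" + programfiles_dir).encode()).digest()

def xor_stream(data, key):
    """Toy symmetric cipher standing in for RC4 in the real malware."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"run_payload()"
target_key = derive_key("C:\\Windows;C:\\Python", "TargetApp")  # hypothetical

blob = xor_stream(payload, target_key)             # what ships in the malware
on_target = xor_stream(blob, target_key)           # key recomputed on the victim
elsewhere = xor_stream(blob, derive_key("C:\\Windows", "OtherApp"))

print(on_target == payload, elsewhere == payload)  # True False
```

Because the key derivation is one-way, an analyst holding the blob must guess the triggering environment, which is exactly the wall researchers hit.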

Another interesting feature of Gauss is its installation of a custom font, called Palida Narrow.

Kaspersky had no idea why it was installed, but the researchers at CrySyS have some good hypotheses:

One possibility is that there are other components using Palida for some reasons. E.g., tricking with some characters on web pages to hide alerts, or similar, not really clear operations.
A very far-fetched idea is that Gauss uses the font for printed material. It actually tricks some parts of the system to substitute fonts with Palida, so any prints will contain Palida. Later, printed documents could be identified by looking on the tiny specialities of the font.
A third, and more probable idea is that Palida installation can be in fact detected remotely by web servers, thus the Palida installation is a marker to identify infected computers that visit some specially crafted web pages.

They go on to document how web developers can use CSS stylesheets to determine whether a font is installed on a system. If the browser discovers it doesn't have the font, it can be directed to a URL to retrieve the proper font file. By hosting this on a site the attacker controls, they can determine whether a given system has Gauss installed. A writeup with code is provided on the CrySyS page.
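A sketch of how that probe could work, with hypothetical URLs and session IDs: the page declares Palida with a remote fallback, so a clean browser (lacking the font) fetches the fallback file, while an infected machine already has Palida installed and never requests it. Correlating page hits with font hits then flags likely infections.

```python
# The CSS served to visitors: local() is used if the font is installed,
# the url() fallback is fetched only when it is not.
PROBE_CSS = """
@font-face {
  font-family: 'Palida Narrow';
  src: local('Palida Narrow'),
       url('/probe/palida.ttf');
}
"""

def classify_visitors(page_hits, font_hits):
    """Sessions that loaded the page but never fetched the fallback font."""
    return sorted(set(page_hits) - set(font_hits))

page_hits = ["sess-a", "sess-b", "sess-c"]  # everyone who rendered the page
font_hits = ["sess-a", "sess-c"]            # clean browsers fetched palida.ttf
print(classify_visitors(page_hits, font_hits))  # ['sess-b'] likely has Palida
```

Easy to deploy, invisible to the user, and no exploit required, which is part of why the signaling hypothesis is attractive.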

Another possibility is that the font was inserted to create a vulnerability that provides a backdoor into the system. Fonts have been used in attacks in the past; this could just be another opportunity for future access. More specifically, Duqu exploited TrueType font parsing, and Duqu is alleged to have been developed by the same people as Gauss based on their architectural similarities.

[Feb 2013] The Wired article I linked to describing Gauss says that both Kaspersky and Crysys believe signaling was the intent, and I agree that is clearly most likely. Given the targeted, sensitive nature of the attack, the limited number of machines it was found on (and the lessons learned from Stuxnet landing in all sorts of unintended locations), and the fact that nobody has identified (or at least reported) a vulnerability stemming from Palida, signaling just makes sense. Easy to check, subtle, and useful.

As of August 15th, Internet traffic on Gauss had dropped significantly, and people were recognizing they had a serious, unsolved mystery on their hands and setting out to crack it. An article on ZDNet in September pointed out it still hadn't been cracked. In December they posted about a cracking tool targeting the MD5 hash used to protect the payload decryption targeting/fingerprinting module (which, incidentally, runs MD5 10,000 times... not surprising it hasn't been broken yet!).
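To see why those 10,000 rounds matter, here's a rough sketch of what the dictionary attack looks like. The salt, guess lists, and function names are my own; only the iterated-MD5 construction comes from the published analysis. Every candidate %PATH%/directory pair costs 10,000 MD5 invocations, so a dictionary of a billion pairs costs ten trillion hashes:

```python
import hashlib
import itertools

ROUNDS = 10000  # per the published analysis of the Gauss fingerprint


def fingerprint(candidate: bytes, salt: bytes) -> bytes:
    """Iterated MD5 over a candidate attribute string plus salt."""
    digest = candidate + salt
    for _ in range(ROUNDS):
        digest = hashlib.md5(digest).digest()
    return digest


def crack(target: bytes, salt: bytes, path_guesses, dir_guesses):
    """Dictionary attack: try every (%PATH%, directory-name) pair until
    one reproduces the target hash, or give up."""
    for path, dirname in itertools.product(path_guesses, dir_guesses):
        candidate = (path + dirname).encode()
        if fingerprint(candidate, salt) == target:
            return path, dirname
    return None
```

Each guess costs 10,000 times a single hash, which is why the crowdsourced effort goes nowhere without much better guesses at the target machine's configuration.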

February 5, 2013 the cracking tool was updated to a new version (see history here) and there was no information indicating anything other than a complete stonewall. (They still haven't cracked the encrypted payload or identified what the font is used for.)

[May 7, 2013] The Infosec Institute has a nice writeup on Gauss (I found it as they reference this blog post) that covers some aspects I didn't describe.

Monday, March 26, 2012

Air Force Electronic Attack and Cyber

Good article in Aviation Week & Space Technology a few days ago that I had to share. Not surprisingly, it was written by David Fulghum, who wrote several other articles I've referenced in the IW area. He does a great job finding interesting, unclassified stuff to write about in the DoD and IO/IW/EW communities, although it is not always easy to substantiate.

The article quotes several senior AF executives describing aircraft-oriented attack technologies by the USAF and other countries (namely China and Russia). I'll quote them below:
The Air Force is pursuing “cyber-methods to defeat aircraft,” Gen. Norton Schwartz, the service’s chief of staff, told attendees at the 2012 Credit Suisse and McAleese Associates Defense Programs conference in Washington March 8. But Lt. Gen. Herbert Carlisle, the deputy chief of staff for operations, says the same threat to U.S. aircraft already is “out there.”
Ashton Carter, deputy secretary of defense, is pushing both offensive and defensive network-attack skills and technology. “I’m not remotely satisfied” with the Pentagon’s cyber-capabilities, Carter says.
“The Russians and the Chinese have designed specific electronic warfare platforms to go after all our high-value assets,” Carlisle says. “Electronic attack can be the method of penetrating a system to implant viruses. You’ve got to find a way into the workings of that [target] system, and generally that’s through some sort of emitted signal.”
The Chinese have electronic attack means — both ground-based and aircraft-mounted — specifically designed to attack E-3 AWACS, E-8 Joint Stars and P-8 maritime patrol aircraft, he says.
Interesting comments. First, that they are really interested in "cyber methods to defeat aircraft". Second, that he would think stating that goal at the Credit Suisse and co. conference was a good idea. Third, that Ash Carter's not "remotely satisfied" with our cyber capabilities. And fourth, that Herbert Carlisle claims the Russians and Chinese have already designed platforms to attack "all our high value assets".

The article goes on to rehash earlier claims regarding USAF airborne attack capabilities. Wikipedia summarizes those using the three previously mentioned articles from Aviation Week & Space Technology, and two others here. There are two even more detailed articles on the topic in Air Force Technology, mostly expanding on the events in Syria, that I'd not seen before. You can find part one here and part two here.

While reading Fulghum's article I also read a couple of new ones he wrote on NGJ, including a focus on autonomous platforms and info on weapons/AESA radars. I updated my Navy Airborne Electronic Attack post accordingly.

It all reminds me of that saying, "May you live in interesting times." I'd say that's accurate and only accelerating!

Thursday, March 15, 2012

Army Cyberwarfare R&D

Just ran across this interesting article from August of 2011 with Georgio Bertoli, the Army's I2WD Offensive Information Operations Branch Chief. Some highlights:
There are few specifics Bertoli can provide about his work because so much of it is classified. But the primary goal of cyber warfare, he explains, is to provide warfighters with a non-kinetic means of striking enemies without permanently destroying infrastructure. The second goal is to disrupt, deny and degrade enemy operations and prevent them from strategizing and communicating.
His team, which consists of 20 government engineers and support contractors, uses software-defined radio, electronic warfare, signals intelligence and other technologies to help build what the Army refers to as its future force.
"Just like a handgun versus a Howitzer," he says, "there's a whole spectrum of tools."
To give an example of some of those approaches, here's a good presentation he gave at the C4ISR conference that's worth a review. In it, he highlights the differences between CNO (Computer Network Operations) and EW (Electronic Warfare) and the pros and cons of each.

Some other comments from the article:
Unlike kinetic warfare, in which one weapon potentially can thwart multiple enemies — "a bullet is a bullet," Bertoli notes — cyber-warfare typically requires a family of tools. For instance, what works on one particular waveform or network may not work on another.
"So now you have this huge toolbox. How do you manage that? How do you train somebody to be proficient in them?" Bertoli asks. It would be akin to teaching soldiers to use a different gun for each enemy. His team at CERDEC is working to create a common look and feel for cyber tools so they're easy to learn, and to develop a common framework so developers don't have to start from scratch with each weapon.
That reminded me of a solicitation his group put out back in 2009 soliciting technologies from industry. I went online to see what they were asking industry to provide and found that as of Feb 2012 it's still the same BAA from 2009. The document is available on the Army site here, and has lots of fun stuff for all the hackers out there. I won't include all of it for brevity, but here's what is listed under Computer Network Operations:
CNE and CNA support shall include but not be limited to:
    • Network discovery and mapping tools capable of operating in a relatively low bandwidth tactical environment and avoid or circumvent network/host-based IDS 
    • Destroy, disrupt, deny, deceive, degrade, delay, target, neutralize, or influence threat information system networks and their components, and Threat C4-ISR systems and nodes and other battlefield communications and non-communications systems
    • Understand various types of tactics, technologies, and tools used to perform CNO.
    • Vulnerability identifications and testing of both wired and wireless networks 
    • Techniques that can be used to find and route communications data through predefined path (accessible route) or to a particular location (cooperative nodes)
    • Methods for performing both distributed and coordinated CNO missions
    • Non-Access dependent CNO technique R&D 
    • Identification, capture and manipulation techniques for data in transit. 
    • Stealthy, real time, precise (within one meter) geographic location and mapping of Threat/adversary logical networks and their components. This includes, but is not limited to the following:
    Ø Individual work stations, terminals, and/or PCs, either networked or stand alone
    Ø Computer networks of any scale (both wired and wireless)
    Ø Virtual Private Networks (VPNs) (both wired and wireless)
    Ø Computer network components (local and/or backbone)
    Ø Displays
    Ø PCS and other commercially available wireless device types
    Ø Government owned or managed private communications networks (military or non-military)
    Ø Trunked Mobile systems or other networked commercially available communications systems
    Ø Telecommunications equipment (e.g., Private Branch Exchange (PBXs), corded and cordless phones)
    Ø Cryptographic components
    Ø Other peripheral components
    • Stealthy, non-cooperative access to logical networks and their components, that overcome threat/adversary best attempts to protect such networks and components. Proposals submitted under this sub-topic shall specify both hardware and software protection measures forming the basis of the target network environment
    • Stealthy, non-cooperative access to RF devices, communications networks and their network components, non-communications networks and their components, and other RF-centric networks and their components, to develop revolutionary TTPs that overcome threat/ adversary best attempts to protect such networks and components. Proposals submitted under this sub-topic shall specify both the hardware and software protection measures forming the basis of the target network environment
    • Stealthy, non-cooperative network discovery software tools, countermeasure capabilities and TTPs that overcome threat/adversary best information assurance/protect measures. Proposals submitted under this sub-topic shall specify both hardware and software protection measures forming the basis of the target network environment
    • Stealthy, non-cooperative network characterization tools and TTPs that overcome threat/adversary best information assurance and protection measures. Proposals submitted under this sub-topic shall specify both hardware, software, and protocol or transmission protection measures forming the basis of the target network environment
    • Stealthy logical network exploitation and/or countermeasure software schemes and TTPs capable of surgically inserting intelligent software agents into threat/ adversary logical networks, regardless of protocols in use or available
    • Stealthy intelligent software agents and TTPs for exploitation and countermeasures of threat/adversary logical networks, and other network-centric networks and their components, and/or Command and Control networks and their components.
    • Stealthy component mapping of logical networks and location data correlation and deconfliction with other all-source intelligence data 
TTP is Tactics, Techniques, and Procedures for the uninitiated. They also have sections talking about their interest in a CNO framework, software agents, and EW/IW techniques.

If anyone has ideas in those areas they have submission information on their acquisition page. Not anywhere near as user-friendly as DARPA's Cyber Fast Track (CFT), and I'm confident they won't be as quick either. It's not been as well advertised though, so I'm sure they'd love to hear from some innovative people out there interested in building cyber tools. Sounds like fun!

Thursday, February 16, 2012

0-days and cowboys

(I post most of the stuff I see on Twitter now, it's such a seamless way to share information. But I just wrote a long post and thought this article was funny/worth mentioning)

In February 2012, Chris Soghoian called for "reining in" the 0-day researchers and adding regulations or other mechanisms to prevent people from buying/selling "weaponized exploits". He also calls people cowboys and a "ticking bomb", which I think is a bit FUD-oriented. His basic theme, that there's a large, opaque market that could go wrong some day, is generally a legitimate point (I was surprised how fast and loose people could be there), but I'm not sure how on earth legal restrictions could be constructed to address that effectively. The biggest problem out there now is the lack of transparency and trust between buyers and sellers... if the market were brought into the light, buyers like Google and Facebook could continue to improve their products, commercial vendors could get what they are looking for, and researchers could be paid for their work. Hard to picture some senator effectively putting that into legislation or regulation...

Some questions that come to mind:
• Who would define what an exploit is? Does it matter if it's "weaponized" or not? What, exactly, is he proposing to ban/regulate?
• Who defines what is legitimate or not? If the FBI wanted to buy one to compromise some mafia machine, is that OK with him? Or what if it was a government?
• Is Metasploit/Rapid7 bad? Isn't that what Metasploit is, a "weaponized exploit" framework? What about Canvas and all the other penetration testing tools?
• If Congress can't even figure out how to regulate copyright violations without breaking the Internet, who on earth would even dream of suggesting they wade into a domain that's significantly more complex?
• His concern that Anonymous was going to hack some organization that bought an exploit, and then use it, is just a little silly. If they are able to hack into an organization that's buying "weaponized exploits" in the first place, it's pretty likely they don't need much help to wreak havoc.
You can't spend time on every silly suggestion or poorly thought out idea in our community or you'd have a new full-time job, but some deserve to be called out! That doesn't mean thoughtful dialog on how to improve the situation isn't useful (one could argue, necessary!), but adding FUD to the mix isn't helpful.

[Sep 2016 Update] Sounds like the US State Department and the Wassenaar Arrangement folks agreed with his argument and proposed some disastrous rules making penetration testing and research tools export controlled. (So if you go to Blackhat and present on some new vulnerability with a POC and foreigners are in the audience, you could be fined or go to jail!) Rapid7 has a politically correct writeup about some of the issues. And of course Dave Aitel was writing about it non-stop through the process on his mailing list and cyber security policy blog. Fortunately the Wassenaar rules died, although I'm sure they will return in some other form, just like Internet regulations have.