To give you some background, I'll quote an excerpt from Dancho Danchev:
Earlier this month, a mobile malware known as Transmitter.C, Sexy View, Sexy Space or SYMBOS_YXES.B, slipped through Symbian's mobile code signing procedure, allowing it to act as a legitimate application with access to device critical functions such as access to the mobile network, and numerous other functions of the handset.

What happened was that a malicious group slipped one past Symbian's automated mobile code signing process (Express Signed, which doesn't require human analysis), causing a piece of malware to receive a Symbian-signed digital signature. Adding human review doesn't scale well here: Symbian currently signs over 2,000 applications each month and is trying to drastically increase that number to compete with Apple's iPhone.
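To make the failure mode concrete, here is a minimal sketch of an automated signing pipeline. It uses HMAC as a stand-in for the real asymmetric publisher signature (a deliberate simplification, not Symbian's actual mechanism), and the key and app bytes are hypothetical. The point it illustrates: the signature only attests that the bytes are unmodified since signing, not that the code is benign.

```python
import hmac
import hashlib

# Hypothetical signing-authority key; a real PKI would use an asymmetric
# private key rather than a shared secret (HMAC is just a stand-in here).
SIGNING_KEY = b"express-signing-demo-key"

def express_sign(app_binary: bytes) -> bytes:
    """Sign an application with no human review: only the binary's bytes
    are covered, and nothing inspects what the code actually does."""
    return hmac.new(SIGNING_KEY, app_binary, hashlib.sha256).digest()

def verify(app_binary: bytes, signature: bytes) -> bool:
    """A handset verifying this learns only that the bytes match what was
    signed, not that they are safe."""
    expected = hmac.new(SIGNING_KEY, app_binary, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

malware = b"dial premium numbers; harvest contacts"
sig = express_sign(malware)   # the automated process happily signs it
assert verify(malware, sig)   # and every verifying handset will trust it
```

The verification step is doing exactly what it was designed to do; the gap is that an automated pipeline never asks whether the signed code should have been signed in the first place.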
Upon notification, the Symbian Foundation quickly revoked the certificate used by the bogus Chinese company XinZhongLi TianJin Co. Ltd. However, because the revocation check is turned off by default on handsets, the practical effect of the revocation remains questionable.
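The default-off revocation check is worth spelling out, because it makes the revocation largely symbolic. A small sketch, with a hypothetical trust-check function and an illustrative serial number (not Symbian's actual API or identifiers):

```python
# Serials the authority has revoked; the entry below is illustrative.
REVOKED_SERIALS = {"XINZHONGLI-2009-001"}

def cert_is_trusted(serial: str, check_revocation: bool = False) -> bool:
    """check_revocation defaults to False, mirroring the handset default.
    The signature chain itself is assumed valid for this sketch."""
    if check_revocation and serial in REVOKED_SERIALS:
        return False
    return True

# With the default setting, the revoked certificate is still accepted:
assert cert_is_trusted("XINZHONGLI-2009-001") is True
# Only a handset that opts in actually honors the revocation:
assert cert_is_trusted("XINZHONGLI-2009-001", check_revocation=True) is False
```

Unless the check is on by default, revoking a certificate protects only the small fraction of users who changed the setting, which is why the revocation's real-world effect is questionable.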
The problem points to the larger question of code validity, integrity and automated detection of malware in binaries. Even with extensive human analysis, an attacker can hide bad things in legitimate software, or fool/attack the legitimate servers providing the code. In the cases we've seen this occurring, it's often because the attacker made a mistake or someone got lucky and stumbled across it, not because the overall system is robust against attack.
There are numerous papers and projects out there trying to figure out how to automatically catch these types of attacks (here, here, here, etc.), but they are all bounded by the halting problem: it's not possible to build code that automatically determines what other code will do in all cases (as shown in Fred Cohen's 1984 work and follow-on work by him and others). That said, it is certainly possible to catch lots of things most of the time; the question is how much and how often. DARPA has an interesting version of this problem, trying to automatically detect bad things in chips in its TRUST program. I haven't seen anyone try to figure out what the theoretical upper limit of these types of research efforts is, or frankly how to even quantify the problem sufficiently; that's where I'd spend my energy if I were engaged in this area.
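The undecidability argument behind that claim is easy to sketch. Cohen's result follows a diagonalization: assume a perfect detector exists, then construct a program that consults the detector about itself and does the opposite. The toy Python below (detector and program names are my own, purely illustrative) renders the shape of the argument; any concrete detector we plug in will be wrong about the contrary program.

```python
def is_malicious(prog) -> bool:
    """Stand-in 'perfect' detector. Whatever concrete rule we pick here,
    the construction below guarantees it misjudges `contrary`."""
    return getattr(prog, "flagged", False)

def contrary():
    """Does harm exactly when the detector declares it benign."""
    if is_malicious(contrary):
        return "behave"    # detector said malicious -> acts benign
    return "do harm"       # detector said benign -> acts malicious

# Whichever verdict the detector gives, `contrary` contradicts it:
verdict = is_malicious(contrary)   # the detector's claim
behavior = contrary()              # the actual behavior
assert (verdict and behavior == "behave") or \
       (not verdict and behavior == "do harm")
```

This is why detection research is about coverage rates rather than completeness: no detector can be right about every program, but nothing stops one from being right about most programs seen in practice.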
The other problem the code-signing community has to deal with is trust. Mikko Hyppönen from F-Secure says that "It shows the express signing process is not foolproof, but it's still much better than the apps not being signed at all." While that's probably true, there is a big qualifier that goes with it: by digitally signing something and stating that it's valid/secure/trustworthy, you drastically change the user's calculus when they install something. In today's Wild West model on the Internet, most users know they cannot trust any application and have to be cautious about its source, content, etc. When a company like Symbian digitally signs applications as valid and that trust is compromised, you have to wonder whether they are just doing it to ensure a monopoly/control over the platform and to charge application developers, and what liability they incur by inappropriately validating these third-party applications.