Monday, October 15, 2007
2factor's RPM: is it secure?
A company called 2Factor claims to improve security by providing a "secure applet" that continuously generates new keys for each transaction. The claim is that "This session of the browser is completely controlled by the SecureWeb® software, eliminating the threat of hacking attacks". The assumption is presumably that a fresh applet avoids vulnerabilities in the browser and is therefore secure. However, this applet is only as secure as the hardware and OS it runs on. How does this scheme protect against rootkit-based Trojans? How difficult is it to "hijack" the icon it creates ["Online customers perform a one-time download of a small file that places a bank-branded icon on their computer desktop."] and replace it with a Trojan that invokes the applet in a sandbox and has access to all its state? How is the download of the applet itself secured? Until I have answers to these questions, I would be wary of this technology.
This technology may be useful for improving performance relative to SSL/PKI if it has an equally good (or better) and faster way of generating keys. However, crypto-accelerator hardware is so mature these days that I doubt the "performance story" alone would make this startup fly. Maybe the founders felt the same, and that is why they cooked up the "security story" I criticized above. 2Factor also claims that "the advent of quantum computers will render current technology useless, RPM's underdetermined system of equations is PROVABLY secure". I don't think quantum computers are arriving anytime soon, but I would love to know whether RPM has been evaluated by cryptographers against the claim of being provably secure.
2factor does have an "easier story" and that might indeed be a valuable advantage.
Friday, September 14, 2007
How not to handle data leaks: TD Ameritrade
"Please be assured that UserIDs and passwords are not included in this database, and we can confirm that your assets remain secure at TD AMERITRADE. "
This is good, but even if they were exposed:
- if the passwords were "hashed" with nonces (salted, as they should be), I have nothing to fear.
- if they were stored in cleartext or hashed without nonces, and I had a weak password, I could just go and change it.
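For illustration, here is a minimal Python sketch of the "hashing with nonces" (salting) idea; the salt length and iteration count are illustrative choices, not a recommendation:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Derive a salted hash. A fresh random salt (nonce) per password means
    identical passwords produce different hashes and rainbow tables fail."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash with the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest
```

Only the (salt, digest) pairs would be stored, so even a full leak of the table forces an attacker to brute-force each entry separately.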
He continues to say:
"You continue to be covered by our Asset Protection Guarantee..."
and that is good, because even if the password scheme and my password were weak and a crook logged in before I changed my password, I am protected. Awesome! So far so good, except that I would expect this announcement to begin with an apology for leaking my data.
But reading further I start to get nervous:
"While Social Security Numbers are stored in this particular database, we have no evidence to establish that they were retrieved or used to commit identity theft."
On their FAQ page they say:
"After extensive investigations involving outside forensics experts, we have no evidence that this sensitive personal information was taken. That is one of the reasons why we have also hired ID Analytics. Its initial investigation has concluded that there is no evidence of identity theft as a result of this issue. Because of our ongoing investigation, we will not provide additional details."
In another place they say:
"In fact, we have been able to conclude that this sensitive information belonging to our legacy TD Waterhouse retail and institutional clients was not retrieved."
I wonder how they established this, and, already alienated by the rest of the PR material, I am inclined to believe that this is misinformation as well.
They use the terms "extensive", "initial", "continuing" to describe their investigation depending on what they are trying to say. They use "initial" and "continuing" when trying to convince me that they cannot tell me how the forensic experts reached the conclusions they did but they use "extensive" when they want to convince me that these conclusions have been reached.
TD Ameritrade having no evidence that my sensitive information was leaked, or of identity theft, does nothing to calm my nerves. The crooks could still have this information. They could have covered their tracks so that there is no evidence. They may have left behind evidence which TD Ameritrade will never find (in fact, TD Ameritrade has a lot to gain by not finding this evidence and a lot to lose by finding it). They may not have used this information yet, knowing the heightened alert level right now. What stops them from using it later? The legal system is clearly not on my side, as shown by this ruling in another data-leak case:
"Without more than allegations of increased risk of future identity theft, the plaintiffs have not suffered a harm that the law is prepared to remedy."
How would I ever be able to tie a future ID theft to TD Ameritrade's leak?
Why can TD Ameritrade get away with this? Because my security is not their concern; it is an externality for them. The only way to solve this recurring problem is to change that, and only the law, not any advance in security technology, can do so. Meanwhile, not using immutable and leakable information for authentication would ease some of the pain.
Now there is news coverage (Sep 17)
http://www.darkreading.com/document.asp?doc_id=134056
http://www.wallstreetandtech.com/blog/archives/2007/09/why_td_ameritra.html
Tuesday, August 21, 2007
Another multi-core...
Here is a snippet from the article:
"what will make or break Tilera is not how many peak theoretical operations per second it's capable of (Tilera claims 192 billion 32-bit ops/sec), nor how energy-efficient its mesh network is, but how easy it is for programmers to extract performance from the device."
I agree. And by focusing on the development environment, the ability to run SMP Linux, etc., Tilera has chosen to offer software engineers a smooth learning curve. Although there are some design wins, it remains to be seen whether taking the first few steps on this curve offers a significant advantage over the wares sold by Tilera's competitors: Intel, AMD, Cavium, RMI and Sun.
There is another good article here. An interesting point about pricing:
"For a new entry into the market, Tilera priced its product with confidence: 10K-tray pricing is set at $435 for each Tile64 – which appears cheap, if it can replace ten Xeon processors. But in a real world environment, the processor is priced against a quad-core Xeon 5345 (2.33 GHz, 8 MB L2 cache), which currently sells for a 1K tray price of $455. "
Tuesday, August 7, 2007
Chief blogging officers debating NAC
http://www.nevis-blog.com/2007/08/wondernac-i-lik.html
http://www.stillsecureafteralltheseyears.com/ashimmy/2007/08/more-on-the-won.html
Friday, July 20, 2007
Privacy@Google
Privacy International's findings are here: http://www.privacyinternational.org/issues/internet/interimrankings.pdf
Saturday, July 7, 2007
security research, rootkits and TPM
The media limelight on these researchers also encourages "publicity stunts" and other "celebrity wars". Here is an example:
http://www.securityfocus.com/brief/537
http://www.matasano.com/log/895/joanna-we-can-detect-bluepill-let-us-prove-it/
I do believe that "theoretically" it is impossible to write an undetectable rootkit if the detection system is allowed access to the external world (network access is usually good enough). However, "practically" it is a contest between the rootkit engineer and the rootkit detector engineer. It is certainly possible although difficult to create a rootkit that will be very hard to detect. Similarly, it is possible but difficult to engineer a rootkit detector good enough to detect this rootkit.
Trusted Platform Module is a promising technology that might render the issue moot in the long run. However, TPM itself may have bugs in the beginning: http://www.networkworld.com/news/2007/062707-black-hat.html
I don't know why these guys withdrew...was it because they had found no exploits or were silenced by the TCG?
TPM may have some vulnerabilities in its specification, and implementations of the specification will certainly have more. It is still a good technology, because in a world with TPM, bugs will be confined to a small area and could therefore be found and fixed more easily than in a world without it.
Tuesday, April 24, 2007
What is a Host IPS?
Most host IPSes use a mix of the following techniques to monitor program execution (sometimes called program "behavior"):
- intercept system calls
- intercept access to resources like the registry, file system, and libraries (DLLs)
- track origin of the code being executed
- execute the program in a sandbox...the extreme case is interpreting the program instruction by instruction instead of running it directly on the processor.
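As a toy analogue of the resource-interception technique above, Python's `sys.addaudithook` lets a process watch and veto its own `open` calls. A real host IPS hooks at the kernel or system-call level, and the sensitive-path policy below is purely illustrative:

```python
import sys

denied_opens = []  # audit trail of blocked attempts

def hips_hook(event, args):
    """Veto file opens that match a (hypothetical) sensitive-path policy."""
    if event == "open" and str(args[0]).endswith("/etc/shadow"):
        denied_opens.append(str(args[0]))
        raise PermissionError("HIPS policy: access to %s denied" % args[0])

sys.addaudithook(hips_hook)
```

The hook fires before the file is actually opened, so the denial happens even for nonexistent paths; note that audit hooks cannot be removed once installed.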
There are several compile-time tools for making the stack difficult to overflow. These defend (though they are not foolproof) against buffer overflows on the stack:
- StackGuard
- Stack Shield
- StackGhost
...and a run time method
- program shepherding
Some commercially available Host IPS products are:
- McAfee
- Cisco
- Symantec
- Determina
Wednesday, March 14, 2007
Methods for network based devices implementing data leak prevention
It is interesting to see how one can go about solving this problem. [Note: in this writeup I focus only on detecting leaks through the company's network. There are other ways in which information can be leaked, e.g. through storage devices like hard disks and USB flash drives; the techniques I mention here do not work for them.]
First we need to define sensitive data. A few items like Social Security numbers can be easily defined as regular expressions (ddd-dd-dddd, where d is a digit), and one can scan all network data for anything that looks like a Social Security number. But what about other information? We can apply the pattern-matching approach to other structured information like patient records in a healthcare facility, account information in a bank, etc. What about unstructured information, like design documents or patent ideas communicated between team members in emails? It does not follow a pre-defined pattern. Is there a way of monitoring it? Fortunately, there is. One method is to use Rabin fingerprints. Calculate these fingerprints for all potentially sensitive data and match them against the fingerprints calculated for network traffic. This method works well because even if the data is changed a little (say, a section from a document is copy-pasted into an email), it is still matched by the fingerprints.
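A minimal Python sketch of the fingerprinting idea: a polynomial rolling hash over fixed-size byte windows. The window size, base, and modulus below are arbitrary illustrative choices, and production systems also sample the fingerprints rather than storing all of them:

```python
BASE, MOD, WIN = 257, (1 << 61) - 1, 32  # illustrative parameters

def fingerprints(data):
    """Rolling hash of every WIN-byte window. A copied excerpt of >= WIN
    bytes yields identical fingerprints wherever it lands in the stream."""
    if len(data) < WIN:
        return set()
    h = 0
    for b in data[:WIN]:
        h = (h * BASE + b) % MOD
    out = {h}
    top = pow(BASE, WIN - 1, MOD)  # weight of the byte leaving the window
    for i in range(WIN, len(data)):
        h = ((h - data[i - WIN] * top) * BASE + data[i]) % MOD
        out.add(h)
    return out

def looks_like_leak(sensitive, traffic):
    """Flag traffic that shares any window-sized chunk with the corpus."""
    return bool(fingerprints(sensitive) & fingerprints(traffic))
```

This is why a copy-pasted paragraph inside an otherwise new email still matches: its interior windows hash to the same values as in the source document.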
An approach that combines pattern matching for known and/or structured data and fingerprinting for unstructured data works well in detecting unintended accidental data leaks in information passing through a company's network. A report says that 60% of the leaks reported so far are of this nature. So it is a useful approach. What about the other 40% intentional data theft? I will write about it another day but the first thing that will come into mind is to apply some kind of "locks" and "alarms". Locks in the digital world are cryptographic techniques and alarms are data access, modification and transmission logs.
[Some readers will note that I did not mention "watermarks". I consider that as a subset of structured data]
Tuesday, March 13, 2007
Sandboxes for false positives in IDS
However, as I said, it is not an exhaustive method, so it is often complemented by protocol anomaly detection and behavioral anomaly detection. Protocol anomaly detection (PAD) is a very reliable technique, but the mere presence of an anomaly does not always indicate an intrusion. Network behavioral anomaly detection (NBAD) is worse because it relies on unproven statistical models of network traffic and user behaviour. [Related reading] However, both methods are useful for locating "suspicious activity", protocol anomaly detection more so than behavioral. The challenge is to deal with the false positives they inevitably produce. A good solution to this problem is now available in at least two commercial products, FireEye and Check Point's MCP, which use "sandbox execution" of code extracted from anomalous traffic. The approach is in its infancy but very promising, because it is to intrusion detection what blood tests are to disease diagnosis: it has very low false positives and works against zero-day threats. It is "expensive" because it requires a lot of computation, but if PAD and NBAD are used to narrow the search space of traffic, it can scale well.
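The scaling argument can be sketched in a few lines of Python; the check functions here are placeholders standing in for real PAD/NBAD heuristics and a real sandbox:

```python
def two_stage_ids(flows, cheap_check, sandbox_check):
    """Stage 1: cheap anomaly checks run on all traffic and narrow the field.
    Stage 2: expensive sandbox execution runs only on the survivors, so its
    cost scales with the suspicious fraction, not with total traffic."""
    suspicious = [f for f in flows if cheap_check(f)]
    confirmed = [f for f in suspicious if sandbox_check(f)]
    return suspicious, confirmed
```

With, say, an oversized-request heuristic as the cheap check, only the rare flagged flow ever pays the sandbox cost, while benign traffic is dismissed at line rate.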
I believe most commercial IDS/IPS and even anti-virus/spyware vendors will add this weapon to their arsenal this year. Thoughts?
Monday, February 12, 2007
Personal information leaks...
This is about the VA losing data again, not reporting it quickly, and making a completely useless but misleading remark while doing so: "...it doesn't have any reason to believe anyone has misused data...The agency offered a year of free credit monitoring to anyone whose information is compromised". Useless, because if the information were misused the VA would not be the first to know, and if they did eventually learn of the misuse they might take another three weeks to report it. Perhaps the motivation behind an announcement like this is that it may deter the miscreants from mischief for a year. The other contemporary data leak (TJ Maxx) has shown that this is not true.
Misleading, because the one year of free credit monitoring may give a false sense of security to those customers who use it. Armed with the SSN and other sensitive information, the miscreants can carry out their ill intents after a year. Also, some of the mischief they do may not get into the credit report at all, and the part that does will take a while to show up, by which time it might be too late (e.g. the money has been transferred to the Bahamas and nothing can be done).
I have blogged about the problems of using SSNs and other "permanent" and personal information for authentication here http://securetheworld.blogspot.com/2007/01/social-security-numbers-as.html
Thursday, February 8, 2007
Microsoft's trusted ecosystem vision and its review at Dark Reading...
Tim claims that IPSec is insecure because it connects the endpoint to the whole network whereas SSL connects it to only a specific application. While there is some truth to this, IPSec VPN devices generally allow policies to be configured that can restrict access to specific applications.
One of the new frontiers in the security war is the internal corporate network. To secure it, one of the things that needs to be done is to authenticate endpoints connecting to it and to enforce policies. This is being done by NAC (pre- and post-admission), but the authentication aspect is insecure today because, in the absence of a cryptographically secured connection, endpoints can spoof their addresses and fool the NAC devices. I believe IPSec holds promise as a standard and proven technology to fix this problem. I am glad that Microsoft is thinking about this; if they integrate IPSec with 802.1X into Windows seamlessly, it will encourage switch vendors to add IPSec termination to switches and secure the corporate LAN.
Wednesday, January 31, 2007
IPS algorithms...
Comments on "Outer limits on IPS article at Dark Reading"
http://www.darkreading.com/blog.asp?blog_sectionid=403&WT.svl=blogger1_3
The author makes some good points about the limitations of IPSes. However, IPSes are not as useless as he claims. IPSes today use a variety of methods to prevent attacks. Signatures are used to block known "bad stuff". It is true that the attacker can change his "bad stuff" to evade existing signatures, but eventually signatures get updated and the attack is limited if not completely stopped. Meanwhile, anomaly detection is used to counter new attacks that don't have signatures yet. There are two types of anomaly detection (three, if you include host behavioral anomaly detection): protocol anomaly detection and network behavioral anomaly detection. The first works quite well in blocking worms, because most worms spread via buffer overflows: checking network traffic for "too long" protocol fields and for things like "executable code" in data fields will block most current worms. The second technique, behavioral anomaly detection, is useful for detecting port scans (too many failed connections), password-guessing attempts (too many failed login attempts), etc., but many vendors use it to detect things like high bandwidth usage, which will have too many false positives, as the author correctly points out.
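A toy Python illustration of the "too long protocol fields" check; the field names and limits are made up for illustration, whereas a real PAD engine parses the protocol and takes its limits from the specification:

```python
FIELD_LIMITS = {"method": 8, "uri": 2048, "host": 255}  # illustrative limits

def anomalous_fields(fields):
    """Return the names of fields exceeding their nominal maximum length,
    a classic symptom of a buffer-overflow attempt."""
    return [name for name, value in fields.items()
            if name in FIELD_LIMITS and len(value) > FIELD_LIMITS[name]]
```

A request with a multi-kilobyte URI would be flagged without any signature for the specific worm carrying it, which is why this check generalizes to zero-day overflow exploits.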
Friday, January 26, 2007
Social security numbers as authenticators...
They are like "pre-shared key" based authentication (actually much worse). It is well understood that pre-shared keys are fine for small scale use like your home wireless network and even then it is recommended that they be changed periodically. The case with SSNs is much worse: the same key is used as an authenticator over a person's whole lifetime and everywhere the person needs to authenticate himself: banks, rental leases, loans, employers... And it cannot be changed!
Wednesday, January 24, 2007
On Internal LAN security, switches and IPSes
Welcome to my blog !
I am an engineer who has been developing security products for about seven years. In this blog I will write about information security from the point of view of an engineer who builds security products. I would love to hear your comments and opinions (especially from those of you who use these products) on my small essays.
Cheers,
Mohit