Monday, October 15, 2007

2factor's RPM: is it secure?

Network World just published a list of "10 IT Security Companies to watch".
The first company, 2Factor, claims to improve security by providing a "secure applet" that continuously generates new keys for each transaction. The hypothesis is that "This session of the browser is completely controlled by the SecureWeb® software, eliminating the threat of hacking attacks". The assumption is probably that a fresh applet will avoid vulnerabilities in the browser and hence be secure. However, this applet is only as secure as the hardware and OS it runs on. How does this scheme protect against rootkit-based Trojans? How difficult is it to "hijack" the icon it creates ["Online customers perform a one-time download of a small file that places a bank-branded icon on their computer desktop."] and replace it with a Trojan that invokes the applet in a sandbox and has access to all its state? How is the process of downloading this applet secured? Until I have answers to these questions, I would be wary of this technology.
This technology may be useful for improving performance compared to SSL-PKI if it has an equally good (or better) and faster way of generating keys. However, crypto-accelerator hardware is so mature these days that I doubt the "performance story" alone would make this startup fly. Maybe the founders felt the same, and that is why they cooked up the "security story" I criticized above. 2Factor also claims that "the advent of quantum computers will render current technology useless, RPM's underdetermined system of equations is PROVABLY secure". I don't think quantum computers are arriving anytime soon, but I would love to know whether RPM has been evaluated by cryptographers against the claim of being provably secure.
2Factor does have an "easier" story, and that might indeed be a valuable advantage.





Friday, September 14, 2007

How not to handle data leaks: TD Ameritrade

I was greeted this morning by my broker's CEO. After telling me that he had leaked my data, he added:
"Please be assured that UserIDs and passwords are not included in this database, and we can confirm that your assets remain secure at TD AMERITRADE. "
This is good, but even if they had been exposed:
- if the passwords were hashed with nonces (per-user salts, as they should be; see the sketch after this list), I would have nothing to fear.
- if they were stored in cleartext or hashed without nonces and I had a weak password, I could simply go and change it.
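For the curious, here is a minimal sketch of what "hashing with nonces" (per-user salts) looks like. This is purely illustrative; it is not a description of TD Ameritrade's actual password scheme, and the iteration count is an assumed work factor.

import hashlib
import hmac
import os

ITERATIONS = 100_000  # assumed work factor, purely illustrative

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random per-user nonce (salt)."""
    salt = os.urandom(16)  # the nonce: stored next to the hash, not secret
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)

Because every user gets a fresh salt, a stolen table of hashes cannot be attacked with one precomputed dictionary; a weak password can still be brute-forced one user at a time, which is why the second bullet matters.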
He continues to say:
"You continue to be covered by our Asset Protection Guarantee..."
and that is good because even if the password scheme and my password were weak and a crook logged in before I changed my password, I am protected. Awesome! So far so good, except that I would expect this announcement to begin with an apology for leaking my data.

But reading further I start to get nervous:
"While Social Security Numbers are stored in this particular database, we have no evidence to establish that they were retrieved or used to commit identity theft."
On their FAQ page they say:
"After extensive investigations involving outside forensics experts, we have no evidence that this sensitive personal information was taken. That is one of the reasons why we have also hired ID Analytics. Its initial investigation has concluded that there is no evidence of identity theft as a result of this issue.Because of our ongoing investigation, we will not provide additional details."
In another place they say:
"In fact, we have been able to conclude that this sensitive information belonging to our legacy TD Waterhouse retail and institutional clients was not retrieved."
I wonder how they established this, and, already alienated by the rest of the PR material, I am inclined to believe that this is misinformation as well.

They use the terms "extensive", "initial" and "continuing" to describe their investigation depending on what they are trying to say. They use "initial" and "continuing" when trying to convince me that they cannot tell me how the forensic experts reached their conclusions, but they use "extensive" when they want to convince me that these conclusions have been reached.

TD Ameritrade having no evidence that my sensitive information was leaked, or of identity theft, does nothing to calm my nerves. The crooks could still have this information. They could have covered their tracks so that there is no evidence. They may have left behind evidence which TD Ameritrade will never find (in fact, TD Ameritrade has a lot to gain by not finding this evidence and a lot to lose by finding it). They may not have used this information yet, knowing the heightened alert level right now. What stops them from using it later? The legal system is clearly not on my side, as shown by this ruling in another data leak case:
"Without more than allegations of increased risk of future identity theft, the plaintiffs have not suffered a harm that the law is prepared to remedy."
How would I ever be able to tie a future ID theft to TD Ameritrade's leak?

Why can TD Ameritrade get away with this? Because my security is not their concern; it is an externality for them. The only way to solve this recurring problem is to change that, and that requires a change in the law, not just advances in security technology. Meanwhile, not using immutable and leakable information for authentication will help ease some of the pain.

Now there is news coverage (Sep 17)
http://www.darkreading.com/document.asp?doc_id=134056
http://www.wallstreetandtech.com/blog/archives/2007/09/why_td_ameritra.html




Tuesday, August 21, 2007

Another multi-core...

Tilera has come out of stealth. There is a good writeup about its 64-core processor here.

Here is a snippet from the article:
"what will make or break Tilera is not how many peak theoretical operations per second it's capable of (Tilera claims 192 billion 32-bit ops/sec), nor how energy-efficient its mesh network is, but how easy it is for programmers to extract performance from the device."

I agree. And by focusing on the development environment, the ability to run SMP Linux, etc., Tilera has chosen to offer a smooth learning curve to software engineers. Although there are some design wins, it remains to be seen whether taking the first few steps on this curve offers a significant advantage over the wares sold by Tilera's competitors: Intel, AMD, Cavium, RMI and Sun.

There is another good article here. An interesting point about pricing:
"For a new entry into the market, Tilera priced its product with confidence: 10K-tray pricing is set at $435 for each Tile64 – which appears cheap, if it can replace ten Xeon processors. But in a real world environment, the processor is priced against a quad-core Xeon 5345 (2.33 GHz, 8 MB L2 cache), which currently sells for a 1K tray price of $455. "





Friday, July 20, 2007

Privacy@Google

While responding to concerns about the privacy of those who use Google's services, Eric Schmidt said that users worried about privacy can choose not to use Google's services. Not only does this response reflect sheer hubris on his part and alienate intelligent beings, it also sidesteps the real question: "Should users of Google be worried about privacy?" Privacy International says YES.

Privacy international findings are here: http://www.privacyinternational.org/issues/internet/interimrankings.pdf




Saturday, July 7, 2007

security research, rootkits and TPM

The recent spotlight on security research has encouraged a lot of "security researchers" (although the term is a bit too generic...these people are actually "vulnerability researchers"). Today's software contains a lot of security bugs, and these researchers find a lot of them. This is a good thing...it helps raise awareness of the problem and pushes software vendors to fix these bugs.
The media limelight on these researchers also encourages "publicity stunts" and other "celebrity wars". Here is an example:
http://www.securityfocus.com/brief/537
http://www.matasano.com/log/895/joanna-we-can-detect-bluepill-let-us-prove-it/

I do believe that "theoretically" it is impossible to write an undetectable rootkit if the detection system is allowed access to the external world (network access is usually good enough). However, "practically" it is a contest between the rootkit engineer and the rootkit-detector engineer. It is certainly possible, although difficult, to create a rootkit that will be very hard to detect. Similarly, it is possible but difficult to engineer a rootkit detector good enough to detect this rootkit.

The Trusted Platform Module (TPM) is a promising technology that might render the issue moot in the long run. However, TPM itself may have bugs in the beginning: http://www.networkworld.com/news/2007/062707-black-hat.html
I don't know why these guys withdrew...was it because they had found no exploits, or were they silenced by the TCG?

TPM may have some vulnerabilities in its specification, and implementations of the specification will certainly have more. It is still a good technology because, in a world with TPM, bugs will be confined to a small area and could therefore be found and fixed more easily than in a world without TPM.


Tuesday, April 24, 2007

What is a Host IPS?

The term Host IPS is mostly used to denote endpoint software that monitors the execution of applications and looks for intrusions. Interestingly, the term Host IPS implies the absence of signatures whereas the term network IPS generally implies their presence :) Of course, a Host IPS is generally supposed to complement signature-based anti-virus, and contemporary network IPSes utilize more techniques in addition to signatures.

Most host IPSes use a mix of the following techniques to monitor program execution (sometimes called program "behavior"):
- intercept system calls
- intercept access to resources like the registry, file system and libraries (DLLs); a toy sketch of this interception follows the list
- track origin of the code being executed
- execute the program in a sandbox...the extreme case is interpreting the program instruction by instruction instead of running it directly on the processor.
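
As a toy illustration of the "intercept access to resources" item above, here is a hedged Python sketch that wraps the language's file-open call with a policy check. A real Host IPS hooks the kernel or the system-call layer rather than the language runtime, and the blocked paths below are made up, so treat this only as a picture of the concept.

import builtins

BLOCKED_PREFIXES = ("/etc/shadow", "/etc/passwd")  # illustrative policy, not a real product's rules

_original_open = builtins.open

def guarded_open(path, *args, **kwargs):
    """Log each file access and block policy violations before delegating to the real open()."""
    p = str(path)
    if any(p.startswith(prefix) for prefix in BLOCKED_PREFIXES):
        raise PermissionError("Host IPS policy blocked access to " + p)
    print("[hips] file access:", p)
    return _original_open(path, *args, **kwargs)

builtins.open = guarded_open  # code running after this point goes through the guard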

There are also several compile-time tools for making the stack difficult to overflow. These defend (though they are not foolproof) against buffer overflows on the stack:
- stack guard
- stack shield
- stack ghost

...and a run-time method:
- program shepherding

Some commercially available Host IPS products are:
- McAfee
- Cisco
- Symantec
- Determina


Wednesday, March 14, 2007

Methods for network based devices implementing data leak prevention

I wrote about data leaks before. A painful problem like this is an opportunity for some, and we now have quite a few startups selling products that monitor data leaving a company's network for sensitive information. Vontu and Reconnex are a couple of them. Port Authority was another, and it was acquired by WebSense.
It is interesting to see how one can go about solving this problem. [Note: In this writeup I focus only on detecting leaks through the company's network. There are other ways in which information can be leaked, e.g. through storage devices like hard disks and USB flash drives. The techniques I mention here do not work for those.]
First we need to define sensitive data. A few items like social security numbers can easily be defined as regular expressions (ddd-dd-dddd, where d is a digit), and one can scan all network data for anything that looks like a social security number. But what about other information? We can apply the pattern-matching approach to other structured information, like patient records in a healthcare facility, account information in a bank, etc. What about unstructured information, like design documents or patent ideas communicated between team members in emails? It does not follow a pre-defined pattern. Is there a way of monitoring it? Fortunately, there is. One method is to use Rabin fingerprints. Calculate these fingerprints for all potentially sensitive data and match them with the fingerprints calculated for network traffic. This method works well because even if the data was changed a little (say, a section from a document was copy-pasted into an email), it is still matched by the fingerprints.
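To make the two techniques concrete, here is a small sketch. I am assuming a simplified Rabin-Karp-style rolling hash in place of true Rabin fingerprinting over GF(2), and a toy SSN regular expression; a real DLP engine would also normalize encodings and keep only a sample of the fingerprints (e.g., by winnowing).

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # ddd-dd-dddd

def find_ssns(text: str) -> list[str]:
    """Pattern matching for structured sensitive data."""
    return SSN_PATTERN.findall(text)

BASE, MOD, WINDOW = 257, (1 << 61) - 1, 32  # assumed parameters for the rolling hash

def fingerprints(text: str) -> set[int]:
    """Rolling-hash fingerprint of every WINDOW-byte substring of the text."""
    data = text.encode()
    if len(data) < WINDOW:
        return set()
    high = pow(BASE, WINDOW - 1, MOD)  # weight of the byte leaving the window
    h = 0
    for b in data[:WINDOW]:
        h = (h * BASE + b) % MOD
    fps = {h}
    for i in range(WINDOW, len(data)):
        h = ((h - data[i - WINDOW] * high) * BASE + data[i]) % MOD
        fps.add(h)
    return fps

def fragment_leaked(sensitive_doc: str, network_payload: str) -> bool:
    """True if the payload shares at least one WINDOW-byte fragment with the sensitive document."""
    return bool(fingerprints(sensitive_doc) & fingerprints(network_payload))

If an email contains a fragment of 32 bytes or more copied from a sensitive document, the two fingerprint sets intersect and the leak is flagged even though the rest of the message is new text.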
An approach that combines pattern matching for known and/or structured data with fingerprinting for unstructured data works well in detecting unintended, accidental data leaks in information passing through a company's network. A report says that 60% of the leaks reported so far are of this nature, so it is a useful approach. What about the other 40%, intentional data theft? I will write about it another day, but the first thing that comes to mind is to apply some kind of "locks" and "alarms". Locks in the digital world are cryptographic techniques, and alarms are data access, modification and transmission logs.

[Some readers will note that I did not mention "watermarks". I consider them a subset of structured data.]



Tuesday, March 13, 2007

Sandboxes for false positives in IDS

Signatures have been an effective but not exhaustive method of threat prevention for a long time. In the early days there were issues with false positives, then there were shortcomings in dealing with polymorphic threats, but these were due to "bad signatures" and sometimes performance tradeoffs. That is no longer true. [I wrote about various IDS algorithms before.]
However, as I said, it is not an exhaustive method, so it is often complemented by protocol anomaly detection and behavioral anomaly detection. Protocol anomaly detection (PAD) is a very reliable technique, but the mere presence of an anomaly does not always indicate an intrusion. Behavioral anomaly detection (NBAD) is worse because it relies on unproven statistical models of network traffic and user behaviour. [Related reading.] However, both of these methods are useful in locating "suspicious activity", protocol anomaly detection more so than behavioral. The challenge is to deal with the false positives they inevitably produce. A good solution to this problem is now available in at least two commercial products, FireEye and CheckPoint's MCP, which use "sandbox execution" of code extracted from anomalous traffic. The approach is in its infancy but is very promising because it is to intrusion detection what "blood tests" are to disease diagnosis. It has very low false positives and works against zero-day threats. It is "expensive" because it requires a lot of computation, but if PAD and NBAD are used to narrow the search space of traffic, it can scale well.
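Here is a minimal sketch of that two-stage pipeline. The flow representation, thresholds and the sandbox stub are my assumptions; they are not how FireEye or CheckPoint actually implement it.

def looks_anomalous(flow: dict) -> bool:
    """Cheap first pass (PAD/NBAD): over-long fields or executable-looking bytes in data fields."""
    return len(flow.get("header", b"")) > 1024 or b"\x90" * 16 in flow.get("payload", b"")

def sandbox_verdict(payload: bytes) -> bool:
    """Stand-in for detonating extracted code in an instrumented VM and watching for side
    effects (file writes, registry changes, outbound connections). A real sandbox replaces
    this stub; here it simply reports 'benign'."""
    return False

def inspect(flows):
    for flow in flows:
        if not looks_anomalous(flow):          # PAD/NBAD narrows the search space cheaply
            continue
        if sandbox_verdict(flow["payload"]):   # only suspicious payloads pay the sandbox cost
            yield flow                         # confirmed malicious: alert or block

The expensive sandbox only sees the small fraction of traffic that the cheap anomaly checks flag, which is why the combination can scale where sandboxing everything could not.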
I believe most commercial IDS/IPS and even anti-virus/spyware vendors will add this weapon to their arsenal this year. Thoughts?


Monday, February 12, 2007

Personal information leaks...

I came across this news item http://news.yahoo.com/s/ap/20070213/ap_on_re_us/security_breach;_ylt=AmTRfUWSmPQsOMV3KG7fmoAEtbAF
about the VA losing data again, not reporting it quickly, and making a completely useless and misleading remark while doing so: "...it doesn't have any reason to believe anyone has misused data...The agency offered a year of free credit monitoring to anyone whose information is compromised". Useless because if the information was misused, the VA won't be the first to know, and if they did eventually learn that the information had been misused, they might take another 3 weeks to report it. Perhaps the motivation behind an announcement like this is that it may deter the miscreants from mischief for a year. The other contemporary data leak (TJ Maxx) has shown that this is not true.
Misleading because they are offering 1 year of free credit monitoring, which may give a false sense of security to those customers who use that service. Armed with the SSN and other sensitive information, the miscreants can carry out their ill intent after a year. Also, some of the mischief they do may not show up in the credit report at all, and the part that does will take a while to appear, by which time it might be too late (e.g., the money has already been transferred to the Bahamas and nothing can be done).

I have blogged about the problems of using SSNs and other "permanent" and personal information for authentication here http://securetheworld.blogspot.com/2007/01/social-security-numbers-as.html




Thursday, February 8, 2007

Microsoft's trusted ecosystem vision and its review at Dark Reading...

One of the technologies Bill Gates mentioned as part of Microsoft's "trust ecosystem" was IPSec. [http://www.microsoft.com/presspass/exec/billg/speeches/2006/02-14RSA06.mspx]. Tim Wilson at Dark Reading believes that it is an unproven technology ;) and that SSL is better. I would like to point out to him that IPSec has been around for a very long time and there are hundreds of good products around. The only reason SSL became more popular as a VPN method in recent years is that web browsers have it built in and a lot of the applications people were interested in were web based. If Microsoft had provided an easy-to-use IPSec client in Windows from the beginning, it could have been different. As far as core technology is concerned, both IPSec and SSL use pretty much the same cryptographic algorithms and are therefore equally secure. Since it runs at the network layer, IPSec can support almost all applications, while SSL is restricted to those that use TCP. While that covers a lot of applications, it does not cover VoIP, which uses UDP. In addition, IPSec scales well because it does not require the device to terminate TCP. It allows multiple sessions on a single channel, resulting in better scalability in terms of the number of concurrent channels a VPN-terminating device has to support and the number of key exchanges that need to be done.
Tim claims that IPSec is insecure because it connects the endpoint to the whole network, whereas SSL connects it to only a specific application. While there is some truth to this, IPSec VPN devices generally allow policies to be configured that restrict access to specific applications.
One of the new frontiers in the security war is the internal corporate network. To secure it, one of the things that needs to be done is to authenticate endpoints connecting to it and enforce policies. This is being done by NAC (pre- and post-admission), but the authentication aspect is insecure today because, in the absence of a cryptographically secured connection, endpoints can spoof their addresses and fool the NAC devices. I believe IPSec holds promise as a standard and proven technology to fix this problem. I am glad that Microsoft is thinking about this, and if they integrate IPSec with 802.1x into Windows seamlessly, it will encourage switch vendors to add IPSec termination to switches and secure the corporate LAN.



Wednesday, January 31, 2007

IPS algorithms...

Comments on "Outer limits on IPS article at Dark Reading"

http://www.darkreading.com/blog.asp?blog_sectionid=403&WT.svl=blogger1_3

The author makes some good points about the limitations of IPSes. However, IPSes are not as useless as he claims. IPSes today use a variety of methods to prevent attacks. Signatures are used to block known "bad stuff". It is true that an attacker can change his "bad stuff" to evade existing signatures, but eventually signatures get updated and the attack is limited if not completely stopped. Meanwhile, anomaly detection is used to counter new attacks that don't have signatures yet. There are two types of anomaly detection (actually three if you include host behavioural anomaly detection): protocol anomaly detection and network behavioural anomaly detection. The first works quite well in blocking worms because most worms spread via buffer overflows: checking network traffic for "too long" protocol fields and for other things like "executable code" in data fields will block most current worms. The second technique, behavioural anomaly detection, is useful for detecting port scans (too many failed connections), password-guessing attempts (too many failed login attempts), etc., but many vendors are using it to detect things like high bandwidth usage, which will have too many false positives, as the author correctly points out.
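
A minimal sketch of those protocol anomaly checks, assuming made-up field names, length limits and a NOP-sled pattern rather than any vendor's actual rules:

import re

MAX_FIELD_LEN = {"user": 64, "pass": 64, "host": 255}  # illustrative per-field limits
NOP_SLED = re.compile(rb"\x90{16,}")                    # a long run of x86 NOPs in a data field

def protocol_anomalies(fields: dict) -> list:
    """Flag over-long protocol fields and executable-looking bytes in data fields."""
    findings = []
    for name, value in fields.items():
        limit = MAX_FIELD_LEN.get(name, 1024)
        if len(value) > limit:
            findings.append("field '%s' is %d bytes (limit %d)" % (name, len(value), limit))
        if NOP_SLED.search(value):
            findings.append("field '%s' contains a NOP-sled-like byte run" % name)
    return findings

# Example: an FTP USER command with a 2 KB argument trips the length check.
print(protocol_anomalies({"user": b"A" * 2048}))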





Friday, January 26, 2007

Social security numbers as authenticators...

It troubles me to see that, in spite of all the noise about identity theft, nothing is being done to fix the basic broken element in the system: the use of social security numbers and other personal information, like mother's maiden name, to authenticate people. SSNs may have served a purpose as an interim solution for authentication until a "real" solution was found, but they don't scale well.
They are like "pre-shared key" based authentication (actually much worse). It is well understood that pre-shared keys are fine for small scale use like your home wireless network and even then it is recommended that they be changed periodically. The case with SSNs is much worse: the same key is used as an authenticator over a person's whole lifetime and everywhere the person needs to authenticate himself: banks, rental leases, loans, employers... And it cannot be changed!



Wednesday, January 24, 2007

On Internal LAN security, switches and IPSes

I have heard some security researchers claim that since anti-virus (and anti-spyware and other anti-X) software exists on hosts, the network just needs to make sure that it is working properly. That is indeed a core idea behind NAC (or NAP if you prefer), but as the more pragmatic security professionals will observe, security needs a layered approach. This is because no individual layer of security is foolproof; there is always a chance that it will fail. Consider, for example, rootkit-based spyware that can avoid detection by anti-X software on a host but that a network-based monitor (a specialty appliance, or one embedded in a switch, router, firewall or IPS) can detect by looking at the network traffic from/to that host. Indeed, rootkit-based spyware and anti-X software evasion are expected to be dominant problems this year [ http://searchsecurity.techtarget.com/tip/0,289483,sid14_gci1238948,00.html?track=NL-494&ad=577800&Offer=SEbpd124&asrc=EM_UTC_938379&uid=5726676 ].

Multiple layers reduce the chances of security failure because the probability that all of them will fail at the same time is low. Of course, adding layers increases cost and complexity while making management difficult, so we cannot have hundreds of them. But it is fairly easy to see that we need at least two layers: one in the network and one on the host.

It would be a mistake, however, to assume that these two layers will have completely complementary, non-overlapping functionalities. Both will tend to use similar algorithmic techniques, like pattern matching (aka signatures) for known malware, and protocol anomaly detection and behavioral anomaly detection for unknown malware. They may differ in their sets of patterns, protocols or behavior models and in platform-specific implementation optimizations, but for the engineers building them they are essentially similar techniques. They may also differ in their input sources, e.g. host-based systems will scan memory and files whereas network-based systems will scan network traffic.

So my point is that we will see the role of the network in NAC get augmented with more IPS-like features. Firewall and IPS vendors will start getting traction in the internal network (they are mostly deployed at the perimeter today) but will face new challenges unique to the internal network, like higher performance requirements, the need for integration with other intranet infrastructure (like switches, directory servers etc.), and a different application landscape (like CIFS, CVS, J2EE etc., which are mostly not seen as much at the perimeter). On the other hand, switch vendors will start adding firewall and IPS-like features to their switches but will face challenges developing switch architectures that allow the "programmability" and "deep processing" that is needed.


Welcome to my blog !

Hello Readers,
I am an engineer who has been developing security products for about 7 years. In this blog I will write about information security from the point of view of an engineer who builds security products. I would love to hear your comments and opinions (especially from those who use these products) on my small essays.

Cheers,
Mohit