Alternative approaches to security, or keeping the human factor in mind!
When I was working as a system administrator for the first time (it was a job at an internet service provider), I noticed that people tend to use mainstream software on their servers. At first I did not understand the reasons.
It seemed that programs used by the masses have a better chance of becoming relatively bug-free, and therefore secure. Needless to say, source availability is a major condition that makes early fixes and updates possible.
Here is a quote from the Wikipedia article dedicated to Linus's law:
“Linus’ Law according to Eric S. Raymond states that “given enough eyeballs, all bugs are shallow”. More formally: “Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.” The rule was formulated and named by Eric S. Raymond in his essay “The Cathedral and the Bazaar”.”
In the case of proprietary software, people often believe that if it is widely used then it must be good software. "Why would everybody use it if it weren't good?" they often ask. They also think that vulnerabilities in widespread software will most probably be found, and therefore the vendor will probably supply patches.
But let's try to think it through and face reality.
If you rely upon a software vendor, then you use the vendor's own tools for software maintenance. Those tools come with the operating system and provide simple ways to install and update tested, and therefore probably stable and bug-free, software. You then assume that apt-get, yum or Windows Update is all you need to be sure your software is up to date and contains no well-known vulnerabilities.
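To make this concrete, here is a minimal sketch of such an automated update step in Python, assuming a Debian-style system with apt-get on the PATH (a yum-based system would run "yum -y update" instead); it is an illustration, not a recommended production setup:

    #!/usr/bin/env python3
    """Minimal sketch of an unattended update step; assumes a
    Debian-style system with apt-get available. Run from cron or by hand."""
    import subprocess

    def update_packages() -> None:
        # Refresh the package index, then apply all pending upgrades.
        subprocess.run(["apt-get", "update"], check=True)
        subprocess.run(["apt-get", "-y", "upgrade"], check=True)

    if __name__ == "__main__":
        update_packages()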
It is of course a pleasure to use automated updates and be confident that the software will still work after applying them.
The large community of free software users contains people who are able not only to discover bugs but also to fix them. Indeed, fixes often become available as source patches within a very short time.
But this does not always mean that a new release will follow immediately after a vulnerability has been discovered and a patch has been prepared. Often a new release contains several bug fixes and is issued only after a testing period. As a rule, only after that does the distribution vendor come onto the scene and prepare a package to be used during automated updates. That package may contain pre- and post-install scripts and distribution-specific patches, so it needs testing yet again.
What I want to say is that there is a gap between the moment an exploit is published and the moment the software is updated in the vendor's repositories. During that time many servers on the net are defenceless.
It goes without saying that in the case of proprietary software, even when you know the bug, have the exploit and know how it works, you can do nothing about it except, perhaps, exploit that flaw yourself. You are also free to write a mail message to your vendor every hour asking for a patch or an updated version. The fact is that proprietary software often remains vulnerable for a much longer time than open source software.
But not everyone uses precompiled software from distribution vendors. Another approach is common in many companies: compiling server software without using the distribution's package management at all. In this case the software is harder to maintain, installation takes more time, and there will be no distribution-specific patches. For instance, many companies use not only self-compiled server applications such as mail or web servers, but even their own custom-compiled kernels (where GNU/Linux is the server platform).
In other words, in this case there is almost no benefit, and therefore little sense, in choosing one particular operating system distribution over another, except perhaps early manual updates: updates that can be applied sooner than the vendor's.
Of course, even a relatively big group of administrators cannot ensure that the software they have patched, configured and compiled is stable enough, because they lack the resources of a distribution maker. The latter has not only a QA department but also a wide community of enthusiastic testers and maintainers.
It is far from certain that server software compiled by an administrator is more stable, or more bug-free, than the packages supplied by the vendor.
That is why in companies where manual software management is the rule, technical supervisors and administrators tend to avoid updates. If you hear a phrase like "don't touch it if it works well enough", you can be pretty sure the speaker is forced to compile and upgrade software by himself. Manual updates are of course doable, but they carry greater overhead and can be followed by hard-to-foresee consequences.
That is another reason why old, buggy software is present on so many servers.
It is also important to mention that commonly used software is usually feature-rich. That is because software vendors prefer to supply and support feature-rich software: it is of course easier to support one web server than three different ones. Vendors choose applications that cover the possible demands of all clients, who in turn usually choose that software because the vendor proposes it. You may remember the story of Internet Explorer and Media Player from Microsoft. The presence of the Apache HTTP server on most Unix web servers that serve nothing but static HTML pages clearly shows the tendency described above.
At the same time, many of us believe that complicated software has less chance of being bug-free. Minimalist, simple software is as a rule more stable and sometimes works faster. Many computer science teachers persuade their students that instead of crawling through endless debugging it is better to use a simple and obvious algorithm.
Indeed, most people who use a particular server product don't even know all the special features it provides. Very often they just need simple and limited functionality: functionality that could be offered by another, simpler and probably unknown, application.
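To show how little code "simple and limited functionality" can take, here is a minimal sketch of a static-file web server built from nothing but the Python standard library; the port number is an arbitrary choice:

    #!/usr/bin/env python3
    """Minimal static-file web server: serves the current
    directory on port 8080 (an arbitrary choice) and does nothing else."""
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    if __name__ == "__main__":
        # SimpleHTTPRequestHandler maps request paths onto files
        # under the current working directory, and nothing else.
        HTTPServer(("", 8080), SimpleHTTPRequestHandler).serve_forever()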
Moreover, if you follow security-related sites, you have apparently noticed the countless published exploits for mainstream software. Those exploits are waiting to be used, while many servers are waiting to be hacked.
For rarely used software there are no, or almost no, published exploits.
Such reflections lead me to the conclusion that widespread software cannot be assumed to be very reliable and secure, however feature-rich it may be. If your company doesn't really need that whole set of supported features, it is sometimes better to use non-mainstream software.
When a piece of software is not in common use, the probability that someone will work on hacking it decreases dramatically. The less it is known, the more secure it is, despite all the security flaws it may possibly have. Such software can be found on SourceForge and similar sites, and you can find a lot of links to simple server solutions by searching through developers' forums and newsgroups.
An unknown flaw in unknown software can be used by an experienced and advanced hacker only if he, under some circumstances, wants to attack your particular server, and moreover has the necessary qualifications and time to do the research to find vulnerabilities in your particular software. That work is much harder than using a published and tested exploit. An experienced hacker won't spend time hacking the non-mainstream software of a non-famous site: it won't bring him fame or money. And even if he tries, his chances of success are very low, because simple applications contain fewer bugs, and thus fewer vulnerabilities. He will be hacking without any hints or idea of how it works. And if the software is unique, i.e. written by you or to your order, then his chances are incredibly low.
Therefore, it is wise to write your own software. In this case it will be unique, and therefore impenetrable and impregnable: there is almost no chance your software will ever be hacked. Sometimes even DoS attacks can be repelled with surprising simplicity. Of course, on this path you had better neither share your software with anyone nor release its source. If no one but you uses it, then no one will need to modify or fix it for you. And most probably your unique application will be simple and cover only your own needs; if someone needs software with similar minimalist functionality, he can write it himself.
That also explains why there would be almost no demand from society even if you did share it. I am sure that most of the people who read this article will continue to use mainstream software anyway.
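Just to make "simple and covering only your needs" concrete, here is a toy sketch of such a single-purpose service in Python; the one-line protocol, the command name and the port are all invented for illustration:

    #!/usr/bin/env python3
    """Toy single-purpose TCP service speaking a made-up one-line
    protocol; the command and port number are invented for illustration."""
    import socketserver

    class PingHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Read a single line and answer the one command we support.
            line = self.rfile.readline().strip()
            if line == b"PING":
                self.wfile.write(b"PONG\n")
            else:
                self.wfile.write(b"ERR unknown command\n")

    if __name__ == "__main__":
        with socketserver.TCPServer(("", 9099), PingHandler) as srv:
            srv.serve_forever()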
It is worth mentioning that it is always better to mask your server application so that it is harder, or even impossible, to guess which particular application or version is used. In most cases that means removing welcome strings from web, FTP, SSH and telnet server messages. For instance, if you see a default web server welcome or error page, you can most probably tell its version and sometimes the operating system distribution installed on the server. You can easily confuse a hacker by using the welcome screens of other software, and you may even jokingly use completely nonsensical or funny messages.
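To see exactly what an attacker sees, it is enough to connect and read whatever greeting the service volunteers. Here is a small banner-grabbing sketch in Python; the host and port below are placeholders, not a real target:

    #!/usr/bin/env python3
    """Sketch of the banner grabbing an attacker starts with;
    the host and port below are placeholders, not a real target."""
    import socket

    def grab_banner(host: str, port: int, timeout: float = 5.0) -> str:
        # Connect and read whatever greeting the service volunteers.
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                return sock.recv(1024).decode(errors="replace").strip()
            except socket.timeout:
                return ""

    if __name__ == "__main__":
        # An FTP daemon will typically answer with something like
        # "220 <name> <version> Server ready." before we send a byte.
        print(grab_banner("ftp.example.com", 21))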
The only danger is that your product may happen to have the same flaw as the software you imitate, which may be as bad as a plain buffer overflow. Evidently you should write your software so that it contains no flaws; read the Secure Programming HOWTO before starting. The fact that your software is unique does not mean you can afford awful mistakes when designing and implementing it.
Unless you really need it, try not to use hackneyed software: it can easily be hacked. Maybe it makes sense to find something that better suits your needs and has no superfluous features. And if you are a software developer or a computer science student, which is probable if you are reading this article, maybe it is worth writing server software yourself rather than using the trite and the templated.
This is exactly the case where reinventing a simple wheel doesn't cost too much time and, quite the reverse, is very useful and advantageous.