Linux has a reputation for stability and security. Keep it that way by following a few simple principles.
Linux and other UNIX-like operating systems, such as FreeBSD, have long been considered among the most secure and robust operating systems available. According to W3Techs (w3techs.com), UNIX and UNIX-like systems (including Linux) serve pages for 66.9% of the Internet's websites (as of December 21, 2013), with Linux's share being 31.7%. With that kind of exposure, security is truly the premier requirement, and Linux can certainly deliver it.
Despite its excellent security, Linux can become vulnerable if an administrator becomes careless. In fact, the primary vector for compromising systems is misconfiguration, which opens the door to exploitation of any vulnerabilities present in the OS or applications. By following some simple principles (many of which apply to all operating systems), administrators can harden a Linux-based server to withstand even sophisticated attacks. In this article, I'll explain my philosophy for Linux server hardening and in the process offer the novice Linux administrator some tips.
Limit the Tools
I'm a minimalist when it comes to installing Linux. I'm old enough to remember when 5MB hard drives were considered extravagant luxuries; thus I grew up in the day when one installed only the software packages that were necessary to fulfill the requirements of a server. If I were building a web server then, I'd load only httpd-related packages. Ditto for mail servers, database servers, file and print servers, and so on. The idea then was to minimize memory and disk storage requirements so as to minimize hardware cost.
Today we live in the era of cheap memory and even cheaper disk storage. The smallest base configuration of today's servers is frequently overkill for many applications. As a result, many people don't give much consideration to storage and will go "install happy," loading virtually every package that their distribution offers under the premise that bytes are cheap and plentiful.
As it turns out, it's still a good idea to be a minimalist. The simple fact is that software vulnerabilities won't affect you if the errant software isn't installed. Those habits I developed early in my career still serve me well in this gigabyte/terabyte era. Granted, it's so much easier and more convenient to load everything than it is to be more selective, but with ease and convenience comes exposure.
My favored Linux distributions are Red Hat Enterprise Linux (RHEL) and its free derivative, CentOS. When I do a fresh install, I always de-select everything and then install only a "minimal system." This gets me a minimal install to which I can add the appropriate packages. Now this is not to say that I'm such a rabid purist that I install packages one by one; instead, I take advantage of "group installing" associated groups of packages.
For example, let's say I'm going to build a web server to host a WordPress-based website. After the minimal system is installed and running, I drop to the command line and start with this command:
yum groupinstall base
This gives me a basic Linux system. Since WordPress requires a MySQL database server, and assuming that I'm not connecting to an existing database server, I'll install that software by issuing this command:
yum groupinstall 'MySQL Database server'
To give me the Apache web server, this quick command is all that's required:
yum groupinstall 'Web Server'
At this point, I'll have a fully functional web server that just requires the addition of the WordPress software. If during my configuration I find that a tool is missing, I just install it using yum. For those not familiar, yum is the package management tool for RPM-based distributions such as RHEL. Debian-based distributions, such as Ubuntu, use the "apt" tool in a manner similar to "yum."
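For readers coming from the Debian side, the commands map over fairly directly. A rough sketch (the apt package names here are from memory for that era's releases, so treat them as illustrative):

```shell
# RHEL/CentOS (yum) -- group names as shown by 'yum grouplist'
yum grouplist                      # list the available package groups
yum groupinstall 'Web Server'      # install the Apache group
yum install php-mysql              # add a single package

# Debian/Ubuntu (apt) -- approximate equivalents; exact package
# names may differ on your release
apt-get update                     # refresh package metadata
apt-get install apache2            # Apache is packaged as "apache2"
apt-get install php5-mysql         # add a single package
```

Either way, the principle is the same: install only the groups and packages that the server's role actually requires.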
Do You Know Where Your Software Has Been?
One of the amazing things about Linux is that anyone adventurous enough could build a complete Linux-based server totally from scratch. Nearly all of the code is released under open-source licenses such as the GNU General Public License (GPL); thus the source code is available, so it's entirely within one's grasp to do so. I started using Linux somewhere around 1993, and I got the opportunity to do a from-scratch installation. Let me tell you, while it was an interesting intellectual exercise, it's not one that I will soon want to repeat. It's tedious and error-prone, and on top of it all it requires constant vigilance to ensure that any updates (particularly security updates) that become available are applied to your system and don't break anything. Lather, rinse, and repeat for every server you have deployed in this manner, and you too can experience an incredible time sink.
That's why the concept of a Linux distribution has become so popular. All of that work has already been done for you. The code has been compiled, has been validated through integration testing, and is constantly being updated with the latest and greatest patches. You can pay for support (like with Red Hat) or go it on your own with minimal delay in updates (like with CentOS), but in either case a simple command will do your updates.
To give yourself the best chance for a secure system, you'll want to take advantage of your chosen distribution. Whenever possible, use software packages that are provided by your distribution. It may not be the most up-to-date version in terms of features, but it will be stable and continually updated. Don't be put off if you don't find a package name that matches the software you desire. In many cases, the package name differs from the software name. A classic example is the Apache web server. For RHEL and CentOS, its associated package name is "httpd," so a cursory glance at a package list might give the impression that Apache isn't included. Fortunately, the yum tool can be used to cross-reference the software name and package name. For Apache, issuing the following command will provide the correct package names.
yum search apache
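Beyond searching, yum and rpm can confirm exactly what a package is and whether it's already on the system. A few handy one-liners (output omitted here, since it varies by release):

```shell
yum search apache        # find packages whose name/description mentions "apache"
yum info httpd           # show details for the Apache web server package
rpm -q httpd             # check whether httpd is already installed
rpm -ql httpd            # list the files the installed package provides
```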
So what do you do if the package you want or require isn't part of your base distribution? The next step is to see if there's some kind of extended repository (the place where packages are housed on the Internet) available. In these repositories, you'll find newer packages that have been tested to replace the stock versions. They're generally of high quality and are digitally signed as well, so you can be fairly trusting of their deployment. For RHEL, the extended repository is called EPEL, an acronym for Extra Packages for Enterprise Linux, and is provided by the Fedora Project. Fedora is the Red Hat breeding ground where new RHEL releases are born and raised. If you're using CentOS, you can avail yourself of that same repository or, better still, just enable the centosplus repository. Instructions for doing this are available on the EPEL website or at the CentOS website.
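As an illustration, on CentOS the EPEL repository can typically be enabled straight from the default repositories; the package is named epel-release, but check the EPEL site if your release handles it differently:

```shell
# The epel-release package carries the repository definition and
# signing keys; on CentOS it lives in the default "extras" repo
yum install epel-release

# Verify that the repository is now active
yum repolist enabled | grep -i epel
```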
The joy of using repositories is that they make installation and updating simple. Yum will just as happily fetch required packages for any RPM package it installs, whether from the base distribution or an extended one. And if the package is in a repository, then a check for updates will pull those down when they're available. Repositories make maintenance so much easier than manual installation. The moment you add software that's outside of your distribution or unavailable in extended repos, you become responsible for its care and feeding. That means you'll need to ensure that all packages required to support the package are loaded on your system. For example, to complete the aforementioned WordPress installation, I need to add PHP support to the system, which is done via this command:
yum groupinstall 'PHP Support'
My preferred method of installing and updating WordPress is by using the project's Subversion repository, so the subversion package is another requirement, installed this time by this command:
yum install subversion
Notice the difference? For PHP, I used the groupinstall subcommand of yum, while for the single subversion package I used the install subcommand. Although Subversion makes the process of upgrading WordPress rather simple, I do have the added burden of maintaining it separately from the operating system.
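For the curious, a Subversion-based WordPress deployment looks roughly like this. The tag numbers below are examples from around this article's time frame; substitute the current release, and adjust the web root to suit your layout:

```shell
# Check out a tagged WordPress release into the web root
# (core.svn.wordpress.org is the project's official Subversion server)
svn checkout https://core.svn.wordpress.org/tags/3.8/ /var/www/html/

# Upgrading later is just a matter of switching to a newer tag
svn switch https://core.svn.wordpress.org/tags/3.8.1/ /var/www/html/
```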
The bottom line for me is this: I'll always use stock software whenever I possibly can. Every package I add from "outside" sources adds a layer of potential vulnerability and the certainty of additional testing every time the software is updated.
SELinux
One of the most important security concepts applicable to all operating systems is the principle of "least privilege," which states that a process should run with only enough privileges to accomplish its task, and no more. Failure to adhere to this simple principle is one of the reasons that early Windows machines were so readily compromised. The user was normally also an administrator of the machine, which makes software installation and maintenance easy but opens up a major vulnerability. I know many shops where this is still true, and as a result the IT staff is always busy cleaning up messes in spite of running top-shelf antivirus and malware scanners.
Modern Linux distributions all follow the principle of least privilege, typically starting services as the root user with full admin rights and then dropping the service to a non-privileged user once privileged operations (such as opening ports below 1024) are done. This model has been successful for many years, but it does rely on the competency of the administrator not to misconfigure something.
Enter Security-Enhanced Linux (SELinux): a mandatory access control system that moves security into the kernel. I first wrote about SELinux back in 2005. For a recap, SELinux is a contribution to the Linux kernel from the National Security Agency (NSA), the purpose of which is to add the concept of context to each service. In a nutshell, SELinux ensures that an httpd (web) service only does httpd-type things, an SMTP server only does SMTP-type things, and so on. The first time I saw SELinux demonstrated was at the annual Ohio Linuxfest in Columbus, Ohio, in 2004. Colin Walters, an employee from Red Hat who was working on the SELinux project, gave a demonstration of its power. He made his point by offering anyone in the audience $100 if they could get a list of users from the computer he had sitting at the front of the room. To stack the deck in the audience's favor, he had the machine signed on as the root user—normally capable of doing anything. Present in the audience was a smattering of the best Linux experts, each of whom gave it a go. In the end, Colin took home his $100. SELinux simply wouldn't permit any of them to get the required file.
As a result of that demonstration, I adopted SELinux into my toolkit, and I make it part of every server installation that I do. I have personally seen the benefits: my security logs regularly show hacking attempts being blocked. While it's true that SELinux adds some complexity to a server's configuration, the benefits far outweigh the trouble it takes to learn about it. Red Hat has gone to great lengths to create policies that make SELinux transparent to the administrator, and for the most part it just works. The only time you may run into difficulty is if you're loading software not from the repositories I talked about earlier. Generally, a quick Google search will yield the solution. Should your Google-fu fail, you can always fall back on the audit2allow tool to help you write the required policy, allowing your software to work.
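A sketch of that audit2allow fallback, using httpd as the example service (the module name "mywebapp" is made up; the audit log path is the RHEL/CentOS default):

```shell
# Pull the recent SELinux denials for httpd from the audit log and
# build a local policy module from them
grep httpd /var/log/audit/audit.log | audit2allow -M mywebapp

# Review the generated mywebapp.te before trusting it, then load
# the compiled module into the running policy
semodule -i mywebapp.pp
```

Always read the generated .te file before installing it; blindly loading whatever audit2allow produces can grant more access than you intended.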
So here is where I step up onto my soapbox for a moment. I frequently see instructions for software that mention SELinux and then recommend that you disable it. Please. Do. Not. Do. That. Yes, I know that creating SELinux policies can occasionally be a daunting task. But to take the easy way out and simply disable it is to turn your back on a superb, industrial-strength security solution. If you have a problem with a piece of software and think that SELinux may be the problem, you can readily suspend it by issuing this command:
setenforce permissive
This will instruct the kernel to continue to generate security messages when something occurs that SELinux would have prohibited, but to allow the action to occur. Once you've completed your testing and worked out the changes needed to convince SELinux to permit whatever action is hanging you up, re-enable enforcement by issuing this command:
setenforce enforcing
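You can confirm the current mode at any time with getenforce or sestatus:

```shell
getenforce   # prints Enforcing, Permissive, or Disabled
sestatus     # fuller report: current mode, mode from config file, policy type
```

Note that setenforce changes last only until the next reboot; the persistent setting is the SELINUX= line in /etc/selinux/config.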
I'll now step down from the soapbox.
One other observation about SELinux: At one time, having NSA involvement in a security project would have been quite an endorsement. Thanks to the revelations about the NSA by Edward Snowden, many might reconsider the value of that endorsement. If one were talking about Windows or any other closed-source operating system, then I'd be concerned. Thanks to the transparency provided by open-source software, you can be sure that geeks the world over have audited the software to death, ensuring that the NSA hasn't added anything that shouldn't be there.
To Recap
Keeping your Linux server secure is fairly straightforward if you follow some simple guidelines. First, keep your software inventory small. Install only the packages that you absolutely need to get the job done. Second, know where your software comes from. Select packages from your distribution's repositories whenever possible. Shop the extended repositories for newer versions of packages if the stock repositories aren't providing what you need. Only after you've exhausted those resources should you go outside the family. And no matter where the software comes from, ensure that you keep it up to date. Third, remember the principle of least privilege. Software services should have privilege only for as long as it takes for them to acquire the resources that they need. Then they should drop privileges to "mere mortal" status. Packages provided by the distribution already follow this principle; ensure that the others that you install follow it too. And finally, embrace SELinux. Invest some time to learn more about it, and don't fall for the installation instruction "Disable SELinux."
MC Press Online