Securing the Internal Network

December 20, 2004

The goal of this document is to define guidelines for improving the security of Microsoft Windows-based internal networks. To be useful in real situations, these measures have been designed with the lowest possible cost in mind, so that such a project does not become financially prohibitive. Since security is a field in constant evolution, new solutions may be integrated with those presented here in the future.


One of the first things you learn when you start looking into computer security is that about 80% of the attacks reported on networks come from the inside: principally from fired or disgruntled employees, from external consultants, or from malicious hackers who got inside the network one way or another (non-secured Internet connection, plugged-in modems, social engineering, getting hired by the victim under false pretenses, etc.). Since the demographic explosion of the Internet, this number has tended to drop, but the latest estimates still place between 60% and 80% of network incidents on the internal network. Nevertheless, the majority of computer security companies put most of their effort into securing the perimeter of the network while leaving the internal network itself completely open, whether through a lack of awareness of the problem, a lack of competence, or, more often, a lack of money to fund a project affecting every workstation on the network.

I have seen for myself on a few occasions, while on duty, that once the perimeter of the network is circumvented, the rest of the network is like a big ripe fruit that we simply have to pick. This is why it is imperative to define measures that enhance the global security of computer networks while keeping costs as low as possible. This can be done by optimizing the tools that are already in place and by automating the deployment process, in order to reduce direct human interaction, which is error-prone and expensive because it takes longer.

Definition of the multi-level approach

We often hear that computer security is not a product in itself, but rather a process that we constantly have to review due to the quick evolution of technology and the vulnerabilities that come with it. This is why it is recommended to implement a multi-layer security architecture on your network, to avoid having a single point of failure and to be able to block different kinds of attacks. This is the strategy I chose to follow when writing this document. Not only is securing the internal network itself part of a multi-layer strategy, but a multi-layer strategy will also be applied to secure this network, for the reasons mentioned above.

It is important to mention that the measures described in this document apply principally to securing Windows workstations. They can also be applied to Windows servers, but server administration implies other measures that are out of the scope of this document. Also out of scope are the measures needed to secure the perimeter of your network, such as firewalls and IDS. Even though these measures are not covered here, it is important to take them into account in a global computer security strategy.

There is a common saying that there is a conflict between ease of use and security. To produce efficient results, the proposed solutions must find the balance between these two concepts. However, in the case of Microsoft Windows, I think there is enough fat on the ease-of-use side that we can cut into it generously and thereby re-establish the balance between ease of use and security, which is otherwise dangerously tilted. The different concepts I will explain later are derived in part from my previous whitepapers and in part from recent experiences. For a more theoretical approach to these same concepts, I recommend the excellent paper “Protecting against the unknown” by Mixter. More precisely, this document will cover antivirus protection, personal firewalls, securing the operating system and the various applications used on workstations, and various deployment techniques that can be used to facilitate the task.

Maximising antivirus protection

For a long time, it was believed that a good antivirus and a firewall were all that was necessary to protect a network efficiently. Of course, this is no longer true (see “Autopsy of a successful intrusion”), but we must not neglect the antivirus solution among the means of securing our network. It is important to know that antivirus software is not a panacea, and that it is easy for someone knowledgeable about antivirus products to circumvent such software (which is why we are taking a multi-level approach), but it is even more important to know that, in order to be efficient, the antivirus product has to be regularly updated and properly configured.

In most cases, the default installation is the norm, and this kind of configuration usually leaves holes in terms of antivirus protection. We also sometimes see antivirus software installed only on critical machines or servers, while each and every workstation on the network should be equipped with it, even if your mail server carries antivirus and content-filtering products. All that is needed to compromise the security of a network is a single vulnerable machine, so it is necessary to define protection measures that take this reality into account.

In order to get maximum protection from your antivirus product, the chosen product must be able to scan files and programs in real time in memory as well as files residing on the hard disk. Practically all antivirus products offer this functionality nowadays. It is important to configure the antivirus product to scan every type of file, since it is very easy to camouflage a virus to look like an innocuous file (I Love You, Life Stages and Anna Kournikova are good examples; check “Invisible file extensions on Windows” in Appendix A for more technical details on the possible ways to do this). With the processing power available in today's machines, there is no reason not to scan every file on a machine. You may have to define some exceptions, however, depending on your environment (for example, I exclude from scanning my big 1 GB .pgd encrypted disk file). If the software lets you do so, you should also scan compressed files. And if the antivirus software offers a heuristic detection engine on top of signature matching, you should enable that as well.

So far in this chapter I have only discussed options for protecting a machine against viruses, but what good is it to put protection measures in place if we are not in a position to evaluate their efficiency? Once more, the default configuration is often in place, which means that, in the best of cases, the software will write its log files to the local hard drive on which it is installed. However, some products give you the possibility to choose the destination of your log files, preferably on a central server (often a simple UNC path like \\centralserver\sharedfolder will work). I strongly recommend using this functionality, as it will increase your staff's capacity to understand and evaluate the scope of a virus infection when it happens, without having to hop from machine to machine to review log files. In a crisis situation, such a setup saves you time and gives you the global picture, which is crucial while trying to stop the crisis. If the software also lets you send alerts by e-mail or pager, that should be turned on too. This will notify your staff as soon as an infection occurs, and from their desks they can easily check the centralized log files and make the call: simply an old virus that got cleaned on its way to the network, or a large-scale infection calling for more immediate action? However, some products do not let you change the log file destination, which means that otherwise good products may be overlooked simply because they lack this feature. To solve this, there is LogAgent, a program written in Perl that monitors log files for changes and forwards these changes to a central location as they occur.
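The core idea behind a tool like LogAgent is simple: remember how much of each local log file has already been shipped, and forward only the new entries to the central location. The sketch below illustrates that idea in Python (LogAgent itself is written in Perl); the file names and the idea of a dictionary-based offset store are illustrative assumptions, not LogAgent's actual implementation.

```python
import os

def forward_new_entries(local_log, central_log, state):
    """Append any lines added to local_log since the last call to central_log.

    `state` maps log paths to the byte offset already forwarded, so repeated
    calls only ship the new entries. Returns the number of bytes forwarded.
    """
    offset = state.get(local_log, 0)
    with open(local_log, "rb") as src:
        src.seek(offset)           # skip what was already forwarded
        new_data = src.read()
    if new_data:
        # On a real network, central_log would live on a share reached via a
        # UNC path such as \\centralserver\sharedfolder (hypothetical names).
        with open(central_log, "ab") as dst:
            dst.write(new_data)
    state[local_log] = offset + len(new_data)
    return len(new_data)
```

Run periodically (from a scheduler or a loop), this gives you one growing central file per product to watch, instead of one file per workstation.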

The last aspect to consider is the updating of the antivirus definition files used by the software to identify the viruses that could try to get onto your network. Because of the way signature matching works, if a virus signature is not included in the signature database, chances are strong that it will go undetected (heuristics try to solve this problem, but introduce the possibility of false alarms). Usually, the software will be configured to update once a month, fetching its files directly from the vendor's website. Depending on the level of paranoia expressed by your company (and the rapidly growing rate of virulent activity), these updates should be done daily or weekly, and they should be done from an internal server where the network administrator has previously placed up-to-date files. This will prevent the network congestion that occurs when all your workstations connect to the vendor's website at once, which can be tricky during wide-scale virus attacks. I will cover later in this paper how to deploy your solutions, containing your custom configuration, on your network.
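The internal-mirror scheme above boils down to: the administrator fetches the definition files once, drops them on a share, and each workstation copies only what is newer than its local copy. A minimal sketch, with illustrative paths and a flat file layout (real products each have their own update mechanism):

```python
import os
import shutil

def sync_definitions(internal_share, local_dir):
    """Copy antivirus definition files from the internal staging share,
    taking only files that are missing locally or newer than the local copy.
    Returns the list of file names that were updated.
    """
    os.makedirs(local_dir, exist_ok=True)
    updated = []
    for name in os.listdir(internal_share):
        src = os.path.join(internal_share, name)
        dst = os.path.join(local_dir, name)
        if not os.path.isfile(src):
            continue  # skip subdirectories in this simple sketch
        if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
            shutil.copy2(src, dst)  # copy2 preserves the timestamp, so an
            updated.append(name)    # unchanged file is not copied twice
    return updated
```

Launched from the login script or a scheduled task, only one machine (the staging server) ever talks to the vendor's site; everything else pulls over the LAN.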

One last word regarding virus protection: for the past 4 years or so, virus writers have focused primarily on exploiting flaws in one well-known piece of software in order to propagate their malicious code; I am naming Outlook (and its cousin, Outlook Express). This software, which features various functionalities such as e-mail, agenda, calendar, and so on, and which sports multiple vulnerabilities, is the number one choice for virus propagation. Before Outlook, it was considered impossible to get infected by a virus simply by reading e-mail; one had to open an attachment in order to be infected. Anyone claiming the opposite would quickly be made fun of, and it would be proved to his peers that he didn't grasp the mechanics of computer science. This is no longer true since the arrival of Outlook, because of these new functionalities (others would say vulnerabilities) that now make it possible.

It is very hard to secure Outlook enough to make it harmless, and on top of that, the default configuration (which is highly insecure) is the one most used in companies. For these reasons, many companies put several antivirus utilities at various points in the network architecture, but these utilities are for the most part useless against new, unknown threats. The analogy of a chain, where the weakest link is the one that breaks when the chain breaks, is often applied in the world of computer security. By strengthening all the other links in your computer architecture (antivirus on servers and workstations, mail filtering, etc.) while keeping the weakest link on your network (Outlook), you can only be sure that the chain will break with yet another wave of Outlook viruses. I know that what I am saying here is not popular, but if you really want to make a big step forward in virus protection, ban Outlook and Outlook Express from your network (and I mean the clients here, not the Exchange server, which can be used with other mail clients).

I cover antivirus protection more in technical depth in the paper “Virus protection in a Microsoft Windows network, or How to stand a chance”, that you will find in Appendix A.

Setting up personal firewalls

For a bit more than 2 years now, a new kind of software has made its appearance on the computer security market: personal firewalls. These are numerous and vary in their operation from one product to the other. For this reason, I recommend that you research thoroughly which products are available and evaluate how they work, in order to find which one best suits the needs of your company. There are links in Appendix A pointing to pages listing several personal firewalls that you can download, along with evaluations from previous users.

So, as I was saying, personal firewalls don't all behave the same, and it is on this point that I'd like to expand a bit. Let's take for granted that there is a firewall protecting the internal network from the Internet. What, then, would be the advantage of installing on a PC a personal firewall that works on the same principles as the main firewall, that is, one that filters incoming and outgoing traffic based on rules defined over certain characteristics of the IP packets concerned? A packet sent by a malicious person that manages to bypass the main firewall because it conforms to the rules in place has every chance of doing the same when confronted with the personal firewall, since the odds are great that the packet will also conform to the personal firewall's rules, unless the rules of the two firewalls are appreciably different.

Another strategy, which I find particularly interesting, is a personal firewall that manages incoming and outgoing traffic based on the permissions set for the application requesting the connection, as opposed to the source and destination IP addresses and ports. This type of firewall also distinguishes between the internal and external network, which makes it possible to obtain good granularity in the type of traffic accepted or refused. On top of that, it is designed to stop, right at the PC, any connection attempt made by Trojan horses, denial-of-service agents, and some spyware. It is possible, for each application on the PC, to authorize, refuse, or ask for permission for each connection, on either the internal or the external network. It is also possible to determine which applications have permission to act as servers, meaning they may accept connections from other machines on a specific port. Applications not defined in the permission list will by default always trigger a request for permission.

This way, if a Trojan horse gets onto the PC via an e-mail attachment, it will never be able to receive the connection requests sent by the malicious hacker, even if that person is located on the internal network. The danger with this strategy is being too permissive with your applications. For example, if we allow the command-prompt FTP tool to connect at all times (because it is convenient for the user who uses it often), then it is possible for a cracker to craft a Trojan horse that uses the FTP tool present on the victim's PC to send collected information out of your network without triggering any alarm. Other scenarios using other commonly used software are possible, so in the end it comes down to the risk exposure you are willing to accept. But still, be careful when designing your rules. At a minimum, all command-prompt tools should at least ask for permission, as they offer no graphical hint of their usage. This way, your personal firewalls will complement your main firewall, instead of merely duplicating the same strengths and weaknesses.
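Conceptually, an application-based ruleset is just a lookup table keyed by application and network, with "ask" as the default for anything unknown. A minimal sketch of that decision logic (the application names and decisions are made-up examples, not any product's actual configuration format):

```python
# Per-application connection policy: each application maps to a decision for
# the internal and the external network. Unknown applications fall back to
# "ask", mirroring the default-deny-with-prompt behaviour described above.
POLICY = {
    "mailclient.exe": {"internal": "allow", "external": "deny"},
    "browser.exe":    {"internal": "allow", "external": "allow"},
    "ftp.exe":        {"internal": "ask",   "external": "ask"},
}

def decide(application, network):
    """Return 'allow', 'deny' or 'ask' for a connection attempt."""
    return POLICY.get(application, {}).get(network, "ask")
```

Note how `ftp.exe` is set to "ask" on both networks, per the recommendation above that command-prompt tools should never connect silently.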

In order to increase your network security, I recommend defining only the various servers on your network as the “internal network”. This way, it becomes impossible for a workstation to connect to another workstation on your IP network. This forces all electronic communications to transit via your servers (file server, print server, mail server, DNS, firewall, etc.) before reaching their destination, and makes it impossible(*) for an insider to hack into someone else's PC over the network. For more information about this, I refer you to “Configuring ZoneAlarm securely” in Appendix A.

Certain products will also let you associate specific ports with each application, which gives you one more degree of granularity in your setup. Of course, in order to be efficient, you must have a good idea of what is installed on the workstations on the network and which network each application should be allowed to connect to (for example, internal network only for your mail client; internal and external networks for your web browser). By enumerating the applications allowed network activity (which you should have detailed in your corporate security policy document), it then becomes easy to set standards that prohibit unwanted applications, such as chat clients, instant messaging, and the like. Of course, to achieve this, the configuration has to be protected by a password.

As with antivirus software, it is a wise choice to centralize your log files and keep an active eye on them. We will see later how to make pre-configured installation packages to deploy your personal firewalls effectively.

*(a note on “impossibility”: although I am aware it is a strong word to use in computer security, what I mean is that with such a setup, and a close eye on your centralized log files, if somebody attempts an intrusion, you should normally be aware of it before he succeeds)

Optimising operating system security

Here we will discuss one of the most problematic aspects of securing the internal network: securing the operating system on each workstation on the network. This is principally why securing the internal network is often left undone; it is a relatively complex task, and it traditionally needs to be done by hand, machine by machine, which implies high workforce costs and is prone to errors. Corporate IT departments usually don't have the knowledge required to deploy securely configured PCs in the first place, and even when they do, the configuration often needs to be checked and updated due to new vulnerabilities that keep coming out.

To give you an idea of the size of the task, you will find in Appendix A a link to the Microsoft webpage containing the checklist of all the steps that need to be taken to secure a default installation of Windows NT 4.0. The document the NSA published a while ago is also very informative in this regard. Among the things to do are deactivating the guest account, forcing a complex password for the local admin account, removing unnecessary services and components (such as the POSIX and OS/2 subsystems), restricting access to the LANManager hash, restricting access to folders and registry hives, and applying service packs and fixes, just to name a few. The list is rather long, and it is easy to understand why this aspect is so often left aside: doing all this manually on all the PCs on a network is an enormous task.

In order to solve this problem, Pedestal Software created a graphical tool, called Security Expression, that lets you audit and configure Windows NT and 2000 machines remotely by comparing them to a set of pre-defined security policies corresponding to the secure configuration we wish to obtain (I tried to stay vendor-independent in this article, but I actually don't know of another similar product; if you do, please let me know). Some sample configuration files come with the program, which you can download from the company's website for evaluation: one of the sample files corresponds to the recommendations made by the SANS Step-by-Step guide, another to the “Microsoft Security White Paper”, and three others to the standard US Navy configurations for workstations and servers. These files are redundant in that they at least partially cover the same holes, but I prefer the Navy files, as they are more thorough; you can modify them to suit your needs.

This software doesn't need any agents installed on the workstations. We only have to install it on a machine connected to the network (the administrator's machine is a good idea) and give it the administrator's login information for the domain we want to secure. The software will then proceed to a complete scan of the machines on the domain, matching their configuration against the security policy we want to implement. Once the scan is complete, the program presents an easy-to-understand report that shows, item by item, whether the configuration complies with the security policy. With a single click of the mouse, we can start a similar process that will modify the workstations' configuration to make it comply with the security policy, thus securing the various parts of the operating system on each workstation. We can also use Security Expression on a regular basis to test the integrity of the configuration base, or to push new policies covering newly discovered vulnerabilities.

Security Expression passes its requests using the NetBIOS protocol, which is the base protocol of a Microsoft network, along with the administrator's credentials, to audit and configure the workstations. It is also possible to create your own configuration files, which can be drafted from the sample files that come with the program. In its simplest usage, Security Expression can add, modify or delete registry keys, user accounts and groups, files or ACLs, and probably a bit more. But if you want more flexibility, it is possible to include scripts or programs to give you more tools to deploy your secure configuration. You can also use this to deploy service packs and hotfixes, or other programs like the ones we discussed above.
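The scan-and-report cycle described above reduces to comparing each machine's actual settings against the desired policy and listing every deviation. The sketch below shows that audit step only (not the remediation, and not Security Expression's actual file format); the setting names are examples drawn from the NT hardening checklist discussed earlier.

```python
def audit(machine_config, policy):
    """Compare a machine's settings against the desired security policy.

    Returns a list of (setting, required, actual) tuples, one per
    non-compliant item, ready to be shown item by item in a report.
    A setting missing from the machine's configuration shows up as None.
    """
    findings = []
    for setting, required in policy.items():
        actual = machine_config.get(setting)
        if actual != required:
            findings.append((setting, required, actual))
    return findings

# Example policy, mirroring items from the hardening checklist above.
HARDENING_POLICY = {
    "guest_account_enabled": False,
    "posix_subsystem_installed": False,
    "os2_subsystem_installed": False,
}
```

An empty findings list means the machine complies; anything else is what the "fix it" pass would then go and change.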

Optimising applications security

So far, we have taken steps to protect ourselves against viruses, Trojan horses and DoS agents, and we have considerably secured the operating system environment in order to reduce the number of vulnerabilities that a malicious hacker with physical access to the network could try to exploit. We might think that our task is coming to an end, and that we have finally met the challenge of securing our internal network. But that would be wrong. We still have to take into account the various applications that users need to conduct their daily business, which can also host several flaws capable of compromising the security of our network. Remember the Outlook example I gave in a preceding chapter? It is true that, with all the steps taken so far to secure our network, it will be harder for a potential intruder to achieve his goal, but as long as there is an open door, there is always a way to make it open wider and wider, up to the point of circumventing all our previously taken security measures.

Another application that needs special attention is the web browser, be it Internet Explorer, Netscape, Opera or another. It is important to reduce the capabilities of this type of software, because it is an open window onto your network. For example, it can be dangerous to blindly accept the execution of Java, JavaScript or VBScript applets. Likewise, the acceptance of ActiveX controls is renowned for being insecure, as these controls give web authors (i.e., anyone) the ability to execute code on your machine without restriction. So it is important to take preventive steps to filter these possibilities, while still leaving enough room for an enjoyable web experience. Again, risk-exposure acceptance is a key factor here. E-mail applications also need similar adjustments, such as the deactivation of VBScript execution in HTML messages, for example. If you can, disable HTML mail altogether if you want to sleep tight at night.

In fact, every application installed on your machines that connects in one way or another to the network should be the object of specific research into how to remove known vulnerabilities. The same can be said of application software that has the capacity to execute code in one form or another. One such example is the popular word processor Microsoft Word, which has the ability to execute macros (and was at the origin of a new breed of viruses). Once the risk factor associated with each standard application on your internal network machines has been identified, and the necessary changes have been thoroughly tested and approved, we can once again use Security Expression to deploy the configuration changes to existing machines.


Deployment techniques

In a security context, the ideal situation is to reformat the machines, reinstall everything from scratch and secure everything before putting the machines back on the network, since we cannot establish 100% integrity on an unsecured network. However, in real life this is simply too costly for many companies and a huge task to undertake, not to mention the lost productivity usually encountered in big deployment projects. So the next best thing is to secure the existing machines with the different tools covered so far in this paper, and take the bet that these new security measures will be able to stop, or at least detect, any previous security breach.

As we have seen above, it is very costly to make an enterprise-wide software deployment by going from machine to machine (I still often see it done this way), and it opens the door to human mistakes. In the case of a simple configuration change, we have seen that Security Expression lets us make the change remotely. It is also possible, with the use of scripts, to use it to deploy software. However, another approach that I particularly favor is to create custom installation packages (with software like InstallRite, which is free) according to our specifications. The installation of such a custom package on a machine will require no further effort to make its configuration match our specifications.

InstallRite works by taking a snapshot of your whole hard disk and registry content, before and after the installation of your software, and identifies the changes made to the system by the installation (files or registry keys that have been added, removed or modified). It can then extract these files and registry keys and create a self-extracting program that will automatically install the software with the desired configuration. The trick, of course, is to configure your software the way you want it to be before taking the second snapshot of the system. You can use this to deploy your pre-configured antivirus, personal firewalls, and just about any other productivity software you may want to deploy. The installation itself can then be launched from the login script or by any other method you prefer.
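The snapshot-and-diff principle is easy to illustrate: hash every file before and after the installation, then compare the two maps to find what was added, removed or modified. The sketch below covers the file-system half of the idea only (registry diffing works the same way conceptually); it is a toy illustration of the technique, not InstallRite's implementation.

```python
import hashlib
import os

def snapshot(root):
    """Map every file under root (by relative path) to a hash of its content."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(before, after):
    """Return (added, removed, modified) file lists between two snapshots."""
    added    = sorted(set(after) - set(before))
    removed  = sorted(set(before) - set(after))
    modified = sorted(p for p in before if p in after and before[p] != after[p])
    return added, removed, modified
```

Everything in `added` and `modified` is what the installer touched; that set, plus the equivalent registry changes, is exactly what gets bundled into the self-extracting package.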

Costs and savings

So far, I have only covered the technical aspects of such a project, neglecting the financial aspect for the sake of clarity. But by now the reader should already have a good idea of where the costs will come from: software licenses. Indeed, you will have to budget one software license per workstation on the network for each piece of software that you want to deploy. This is why it is important to reduce costs by checking which software can be reused (antivirus, for example) and by simplifying deployment procedures. The same is true for Security Expression, since its license is based on the number of machines on your network. For bigger networks, a corporate license is usually available and can offer a good licensing alternative.

The other cost factor is workforce. Everybody knows it: qualified technical workers are rare and expensive. This is why it is important to have an efficient deployment scheme, to simplify the task and reduce the number of staff needed. We can easily count on 1 to 1.5 hours per machine for a technician sitting at a machine and mechanically implementing all the things covered in this document, on top of the time necessary for the initial analysis phases of the project (identification of standard software, definition of configurations, tests, etc.). It is a complex and repetitive task that is error-prone, and mistakes can leave a big hole open in the network you worked so hard to secure. By automating the task as we saw in the preceding chapter, the same analysis phase is still necessary, but the deployment time can be drastically reduced, to approximately 5-15 minutes per PC for the same amount of work, depending on various technical factors like processor speed and network speed. The savings in time and workforce are enormous, given the level of security obtained by these measures.
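To make the savings concrete, the per-machine figures above can be turned into total technician hours; the 200-workstation network below is an example size I am assuming, not a figure from the text.

```python
def deployment_hours(machines, minutes_per_machine):
    """Total technician time, in hours, for a given per-machine effort."""
    return machines * minutes_per_machine / 60.0

# For a hypothetical 200-workstation network:
manual    = deployment_hours(200, 90)  # by hand: ~1.5 h per machine -> 300 h
automated = deployment_hours(200, 15)  # automated, worst case 15 min -> 50 h
```

Even taking the pessimistic end of the automated estimate, the manual approach costs six times more technician time, before counting the mistakes it invites.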

Integrated commercial solutions vs. independent products

I have treated the various types of tools in this document as independent entities. However, there are commercial integrated solutions for workstation security, which include an antivirus, a personal firewall, a VPN, encryption and an IDS. These are optional components that can be added or modified at will via a common central interface that lets you manage the suite. In fact, the graphical application that manages this multi-tool solution is not very different from what we have seen here, but it has the advantage of presenting a common interface and tool to configure and deploy all these solutions.

Even if an integrated solution has some advantages, it also has a few drawbacks. One of them lies in the fact that the distribution of the software packages is more complicated than it should be, and you sometimes have to launch your installation routine a few times to cover all the machines on the network. Another is that the interface that shows the log files doesn't do anything more than simply display the log text, and sometimes it does so in a clumsy way; it doesn't beat looking at the log files with a good text editor. But the biggest problem is probably the fact that a vulnerability present in one component can mean that the same vulnerability is also present in the other components of the suite (or at least some of them), which can be exploited to shut down the integrated solution altogether. Of course, using different products from different vendors doesn't necessarily guarantee that such a thing cannot happen, but a vulnerability present in one product has less chance of having an impact on the others.


Conclusion

In this document, I wanted to discuss a problem in computer security that is often overlooked, for either technical or financial reasons: the security of the internal network. More than half (and even near 80% according to certain sources) of reported computer security incidents originate from inside the network, which is at least partially in contradiction with the measures traditionally implemented to secure a network, habitually aimed at outside attacks (firewalls, IDS, content filters, etc.). Although these measures are necessary, they are for the most part useless in the scenario of an attack coming from the inside. They become useless as well if an outside intruder finds a way to circumvent them. The biggest challenge in securing a Windows-based internal network remains the complexity of the task and the number of machines affected. For these reasons, the cost associated with this kind of project is often judged prohibitive, and such projects are left aside as a result.

I have shown in this document that, with the different tools available and a little imagination, it is possible to obtain an appreciable increase in security on the internal network for only a fraction of the price normally associated with this kind of work, which makes it affordable enough to interest companies that would like to protect their data assets.

Even better, the installation of such an infrastructure considerably reduces the volume of “noise traffic”, which should help increase the efficiency of intrusion detection systems (IDS) by reducing the number of false alarms. This aspect has not been tested, and I would like to have the opinion of IDS experts on it.
