EXPERT PAGES

Server and Network Issues by Bill Doyle

Last Update: January 20, 2005

By the time Bill Doyle received his AAS in Computer Science from Central Texas College and joined American Amicable Life as a mainframe programmer, he had been programming PCs since he was 13 years old. When his new Commodore VIC-20 arrived, he read the brief manual and taught himself to program it in BASIC, which it supported natively. He had upgraded to a Commodore 64 by the time he hosted his first BBS, written in C.

At Amicable he also assumed responsibility for systems, migrating the mainframe to the VSE/ESA operating system, EPIC disk/tape management, and other third-party products. In 1994 he moved to American Income to return to his first love, the PC, operating the Help Desk for the field agents and troubleshooting the home office PCs. He next shifted to PC and network system design and programming, with responsibility for the network and internet servers. He returned to Amicable in 2000 as manager of all network and internet services. He continues to program, and wrote Agent EFile, which makes reports available to the agents over the internet.


Q. Bill, if a company decides to provide its network services internally, it is going to need someone who can do everything from running the Cat5 cable and choosing equipment to installing and maintaining the network and the servers, not to mention programming. How tough is it going to be to find someone who can do all that?

I don't consider it that difficult to run a network with a file server, and even with a mail and web server, but it does require knowledge of a lot of different elements. You have to handle selection and registration of your domain names; understand the DNS servers, whether you elect to run one yourself or not; select and purchase adequate internet bandwidth, usually from several sources; and select and install your physical server machines and the operating system, and the same for the file server, the email server and the web server. Plus you get to crawl through the ceiling a lot to get the cable where it needs to go. If a company does not have someone on staff who can readily do these things, it really should stick with a local service to take care of the network and the file server, and leave the web and email to a professional hosting service such as Dan provides.
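
As a small illustration of the sanity checking involved, here is a minimal Python sketch (the domain is a placeholder, not a real configuration) that confirms a name resolves in DNS and that the web server behind it answers:

    import socket

    DOMAIN = "example.com"   # placeholder; substitute your own domain

    # Forward lookup: does the name resolve to an address at all?
    try:
        address = socket.gethostbyname(DOMAIN)
        print(DOMAIN, "resolves to", address)
    except socket.gaierror as err:
        print("DNS lookup failed:", err)

    # Quick reachability test against the web server on port 80.
    try:
        with socket.create_connection((DOMAIN, 80), timeout=5):
            print("port 80 is accepting connections")
    except OSError as err:
        print("could not reach port 80:", err)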


Q. I think we can assume that most small to medium-sized companies are currently running at least a network and a file server, probably with Windows NT as the OS. A lot will be hosting email, and a web site if they have one, with an outside service. If you walked into a company like that, what would you advise with respect to the operating system? Upgrade to Windows 2000? I see that Windows Server 2003 is already available.

I think a company that doesn't already have a huge commitment to the Windows environment is very fortunate. The basic system that runs your computer, whether it is a server machine or a desktop, is the operating system. That can be Unix, Windows, or Linux. You often find Unix on the big, expensive server machines (such as Sun's), and Windows or Linux on the smaller machines. Linux is of course the free one, and Samba is a free open source product that runs on Linux to emulate an NT file/print server. Linux is now fully competent, and the alternatives are expensive. For a company with 200 computers, upgrading the file server's operating system to Windows 2000 would cost over $7,000: Windows 2000 Server itself is $849, and each desktop PC that accesses it requires a CAL (Client Access License), sold at $157 for a 5-pack. Instead you can download Samba and Linux free, your server will look and act like NT 4.0 to the users, and no CALs are required.
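
The licensing arithmetic is easy to check. A quick Python tally of the figures just quoted:

    pcs = 200
    server_license = 849            # Windows 2000 Server
    cal_pack_price = 157            # CALs are sold 5 to a pack
    packs_needed = -(-pcs // 5)     # round up: 200 clients -> 40 packs

    total = server_license + packs_needed * cal_pack_price
    print(f"{packs_needed} CAL packs + server license = ${total}")
    # -> 40 CAL packs + server license = $7129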


Q. In making the transition to open source servers, it seems to me you would want to take it a step at a time, particularly if you were already doing more on Windows than just the file server. Would it be good to run both Linux and Windows during this period, or perhaps move just a few PC clients over to the Samba server at a time until everything is functioning perfectly?

You generally have several server boxes, and you may need to stay with at least one Windows box because of Windows server software you already own, such as the email server, or because your web sites use a lot of ASPs. Active Server Pages scripting will only run on Windows machines. That is a good reason for preferring PHP, an open source server-side scripting language that runs on virtually all operating systems. But for your file/print server, with the expensive CALs, it makes good sense to run Samba on a Linux box. Certainly during the transition I would first put only my test PC on the Linux box, and after I am satisfied that it is solid, add a few users to give me feedback, and then switch the rest.


Q. The financial argument for using Linux and Samba for your network is certainly obvious. But there seems to be a general feeling that Linux is more difficult to install and maintain than a Windows product, that more people are familiar with Windows, and that there may be less support available. Have these concerns been outmoded by the rapid acceptance of Linux and the support of companies like IBM? To put it in perspective, suppose the company is currently running a Windows OS and has a person who has been successfully handling the network. Should management be encouraging a change to Linux? Unless management does, I doubt that many network people are going to take the risk of disrupting something that is already working just to save money.

Certainly more people are familiar with Windows. Windows controls something like 90% of the desktops, so there are a lot of people who can navigate around a Windows client. The Windows Server OS is so similar that it is sometimes difficult to tell the difference. As a result, almost anyone who is facile with computers can operate a Windows server. Linux is quite a bit more difficult to install and operate, mainly because it still involves a lot of pieces. Windows has integrated all of the different components into one. Linux hasn't and probably won't. On the other hand, there are advantages to that. If you don't like the graphical desktop on Linux you can replace it with a different one. You can't do that with Windows.

I suspect it would not be hard to get the support of top management for a switch to open source. Most are not technical enough to appreciate the significance of running a Linux server instead of a Windows server, and the cost factors would appeal to them. The hesitation would probably be closer to home: the head of IT has a greater appreciation of the challenges and will be more concerned about support and personnel. So the biggest thing helping Linux gain a foothold is that it is free. Budgets can be Linux's greatest ally. For example, if IT has been asked to install a new server but the cost has not been figured into the current budget, IT may well choose a Linux server to accomplish the project at no software cost.


Q. While we are on the subject of cost, I know you keep your server boxes up to date at very little cost. You build your own, so would you spell out what you like to see in a good server machine, what it costs, and whether there is any difference between the machines you use for the various servers? Is the major difference between a server machine and a top-of-the-line desktop mainly the hard drives, or is there more to it?

This is another good question. When I started working with computers I read how there were desktop-class PCs and server-class PCs. Server PCs always cost about three times as much as a desktop, but it was difficult to find any significant difference between the two. Generally it boils down to hot-swappable SCSI hard drives, RAID, and lots of RAM.

Servers handle a load. They are the central point of your business, so they usually have better parts, in the sense that the stated mean time to failure is longer. In my view, most companies do not need the extra quality. It really comes down to the load that is placed on the machine. A computer that easily handles 300 light users might only be able to handle 10 heavy, file-hungry users. In our office, we use standard PCs. The only differences are memory (1GB) and SCSI hard drives (Ultra 160). We understand that a server may go down due to hardware failure. Since we have desktops that have the same parts as the server, we have backup equipment available (although the users hate to see us coming with a screwdriver).

As for cost, we stick our servers in really big cases. These cases have 12 bays to handle hard drives, and there are extra fans to help dissipate the extra heat. Since our servers are just PCs with extra server parts, we start with the cost of our standard PC, which is $448, made up as follows:
MSI KT4VL motherboard $76
AMD XP 2200+ CPU (1.8Ghz) $105
PC2700 DDR 512MB x 2 ($92 each) $184
Floppy $7
CD-ROM $17
ATI Radeon 7500 video $59

Then we add $802 of server parts, making the total cost of the server $1250 (tallied in the sketch after this list), as follows:
Enlight 8950 Entry Level Server case $202
Adaptec 29320-R SCSI controller $239
Ultra320 SCSI HD 73GB 10,000RPM $361
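
A quick Python tally of both lists confirms the quoted totals:

    desktop_parts = [76, 105, 92 * 2, 7, 17, 59]   # motherboard through video
    server_parts = [202, 239, 361]                 # case, SCSI controller, drive

    desktop = sum(desktop_parts)            # -> 448
    server = desktop + sum(server_parts)    # -> 1250
    print(f"standard PC ${desktop}, server ${server}")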


Q. You mentioned RAID and hot-swappable drives as features that distinguish a server machine from a PC, but I don't see them on your equipment list above. Did you omit those?

The current servers we are running have RAID 1, which mirrors two hard drives: we have two 9GB, two 18GB and two 73GB drives. We haven't decided whether we will continue with RAID. The negative is that it takes twin drives to do the job of one, since everything is carried on both drives. Hard drives generally do not just fail, at least not without generating errors or making funny noises as a warning. If one does, it will not take long to replace it and do a restore to get it back online.
We decided against hot-swappable drives due to the cost. They really drive up the price because the circuitry needs to be able to handle the power going on and off when you pull a drive out. Hot swap is really for something like a hospital that cannot be down even for a minute. We can handle some downtime. It is an inconvenience, but an acceptable trade-off for the cost savings.


Q. When a company moves from just a file server internally to a web server, the matter of the internet connection becomes critical. We know a simple modem isn't going to work, and most companies probably have more than that just for web access. Is cable or DSL going to be workable for a web server?

There is a huge difference between network speeds and internet connection speeds. Networks can run at 10Mbps, 100Mbps or Gigabit (1000Mbps); I would suspect most are running 100Mbps, with Gigabit connecting the servers. Cable and DSL will download at between 1.5 and 2 Mbps. That speed would work if it ran both ways, but unfortunately both cable and DSL send at only about 300Kbps, which is 0.3 Mbps. For home use you are downloading web pages and sending only small requests, so that works. But when a business operates a web server, it is sending the web pages, so its send speed matters as much as its receive speed. To serve pages with acceptable download times on the other end you are going to need at least one T1, which transmits and receives at 1.544 Mbps.
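
To see what those upload speeds mean in practice, here is a rough Python calculation (the 100KB page size is an assumption for illustration):

    PAGE_KB = 100                       # assumed size of a page with images
    page_bits = PAGE_KB * 1024 * 8

    for name, bits_per_second in [("cable/DSL upstream", 300_000),
                                  ("T1", 1_544_000)]:
        seconds = page_bits / bits_per_second
        print(f"{name}: {seconds:.1f} seconds per page")
    # cable/DSL upstream: 2.7 seconds per page
    # T1: 0.5 seconds per page, and that is serving one visitor at a time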


Q. OK, at least one T1. How much is that going to cost? And what about redundancy? I can't afford to have my connection to the field force down, and sometimes a T1 service will be interrupted. What do I do for backup?

A T1 generally has two parts to its cost: 1) the cost of the service provided by your ISP, which is for its connection to the internet, and 2) the cost of the physical T1 line from your building to the ISP's POP (Point of Presence). Our first T1 at American Income went from Waco to Dallas, which was the closest POP. We were paying the ISP $500 for a limited-usage connection to the internet, something like a 10GB transfer limit, which we never came close to using up. Another $1000 went for the T1 between our building and the POP, which passed through 3 different phone companies. After a year or so the ISP acquired a connection in Waco, so we were able to shift the T1 from Dallas to a local POP. Our monthly bill then dropped to $825: the same $500 for usage, but now only $325 for the T1. Most businesses today should be able to find a local POP.
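
The savings from moving to a local POP add up quickly; in Python:

    isp_fee = 500          # internet usage charge, unchanged
    loop_to_dallas = 1000  # local loop through 3 phone companies
    loop_to_waco = 325     # local loop to the nearby POP

    savings = (isp_fee + loop_to_dallas) - (isp_fee + loop_to_waco)
    print(f"${savings} a month, ${savings * 12} a year")
    # -> $675 a month, $8100 a year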

Keeping a consistent connection to the internet is challenging. There are a number of ways to accomplish this, but one method is to have multiple connections. A good approach is to use a T1 for your main connection and a DSL line as a backup. In some areas you can get an SDSL line, symmetrical DSL, which basically means it sends as fast as it receives. The servers would have both connections wired to them; if the main connection goes down, you route your traffic over the backup. An alternative is to use a hosting company for your web server, and possibly email. Then your building's connection to the internet would not have to be as fast, an interruption in service to the building would not affect your policyholders and agents, and you could avoid the need for a backup.
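
As a sketch of how the failover decision can be automated (the probe target and interval are hypothetical, and the actual rerouting step is left as an alert, since it depends on your router), a Python monitor along these lines would do:

    import socket
    import time

    PROBE_HOST = ("www.example.com", 80)   # placeholder probe target
    CHECK_EVERY = 30                       # seconds between checks

    def link_is_up(host, timeout=5):
        """Return True if a TCP connection over the primary line succeeds."""
        try:
            with socket.create_connection(host, timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        if not link_is_up(PROBE_HOST):
            # Rerouting is router/OS specific; here we only detect and alert.
            print("primary link down: move traffic to the backup DSL")
        time.sleep(CHECK_EVERY)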


Q. Which brings us to the subject of firewalls. Once your network is connected to the internet, a hacker anywhere in the world can get to any machine on the network, unless you have an effective firewall. You use a proxy server to provide this protection. The word "server" sometimes refers to a machine and sometimes to the server software running on it. Which is a proxy server, and how does it work? If set up right, is it perfect security?

A hacker can only get into a computer if there is a piece of software running on it that accepts connections. Unfortunately, Windows has a number of ports listening for connections. Some are for basic file sharing, which may be useful in a home but is dangerous when exposed to the internet. Hackers have found exploits that let them use these connections, so you need to limit access to those ports with a firewall. A firewall can be hardware or software. When configuring a firewall you specify what traffic is allowed to come in on each port: you can block a port completely, or list the permitted source IP addresses.
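
To make the rule logic concrete, here is a toy Python version (the ports and addresses are illustrative, not our configuration): each port is open to anyone, limited to listed source addresses, or blocked outright, and anything unlisted defaults to blocked:

    RULES = {
        80:   "any",             # web server: open to the world
        25:   "any",             # inbound mail
        3389: ["203.0.113.10"],  # remote admin: one trusted address only
        139:  [],                # Windows file sharing: blocked entirely
    }

    def allow(port, source_ip):
        """Decide whether to accept an inbound connection."""
        permitted = RULES.get(port, [])   # unlisted ports default to blocked
        if permitted == "any":
            return True
        return source_ip in permitted

    print(allow(80, "198.51.100.7"))    # True: anyone may browse
    print(allow(139, "198.51.100.7"))   # False: file sharing stays inside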

I would love to say that there is a perfectly secure method, but there are always bugs in code waiting to be uncovered. Generally the exposure is slim, because an attack would come from a deliberate attempt to get in, not from normal traffic. We put up what defenses we can and try to stay educated on what needs to be done to keep secure.

We use a proxy server to give internal computers access to the internet. The proxy server is the older method; most people use NAT, Network Address Translation, now. NAT is easier to set up, but it doesn't give the administrator as much control as a proxy server. NAT can be installed without configuring each piece of software at the user's desk, but it lets users freely run programs such as RealPlayer, and you can end up with everyone on your network streaming audio. With the proxy server, I have to set up a port for internal traffic to talk to the internet, and I can control what is downloaded.
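
The sort of control a proxy gives can be sketched in a few lines of Python; before fetching anything for a user, the proxy consults a policy like this (the blocked list is illustrative, not our actual policy):

    BLOCKED_EXTENSIONS = (".ram", ".rm", ".mp3", ".exe")   # streaming media, downloads

    def proxy_permits(url):
        """Return True if the proxy should fetch this URL for a user."""
        path = url.split("?")[0].lower()
        return not path.endswith(BLOCKED_EXTENSIONS)

    print(proxy_permits("http://example.com/report.html"))  # True
    print(proxy_permits("http://example.com/stream.ram"))   # False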

Our proxy server has a built-in firewall, which sort of confuses the issue, so it is probably best to look at it as two pieces of software: a proxy server and a firewall. All home users, especially those with a cable or DSL connection, should have a personal firewall installed on their computer. ZoneAlarm is a good product, and they have a free version for personal home use. [see review in PC World]


Note: The following description is from the ZoneAlarm site.

A firewall is a piece of software that monitors all incoming network traffic and allows in only the connections that are known and trusted. Port 80 is open so that you can browse web pages; port 1863 allows you to engage in instant messaging with friends; port 443 gives access to secure web pages used by online merchants to encrypt purchases.

You could manually grant or restrict access to each of the 65,535 ports available under the Internet Protocol. Every time you add a new program that requires Internet access, you would need to determine which port(s) it uses, and reconfigure your computer accordingly. You've likely got better ways to spend your time.

Firewall software takes on this burden for you, allowing access to the ports you need open, and closing off those you don't. It also makes your computer "invisible" on the Internet; if hackers can't find you, they will have a hard time attacking you.

More advanced firewall software also monitors outgoing traffic. This is crucial since malicious code spreads by accessing the Internet and pushing copies of itself to other computers (often those of your friends and family!). Outbound protection can keep even brand-new Trojan horses and spyware from doing their damaging work. The ultimate protection is program-level control, so that only those applications that you trust are allowed to access the Internet.

Q. Computer viruses are imposing significant costs on business, not only in defending against them, but also in lost productivity when one breaks through. What should a company be doing in the way of protection? Is there a checklist of best practices that a manager can use to audit the security of his own operation?