

From Mainframe to GNU/Linux Thin-Client: The Advantages of this Methodology



July 16, 2008

By Rodney Donovan

ABSTRACT: Breadth

The evolution from mainframes and terminals, to the personal computer (PC) and fat-clients, to thin-clients raises the question, “Have we come full circle?” The Breadth section of this research analyzes the different aspects of computing as it has evolved from the development of the mainframe and its terminals to today’s Linux thin-client methodologies. With the advent of the PC, the umbilical cord to the mainframe and the minicomputer was cut. This was an exciting time because PCs could emulate the large computers by running programs, displaying graphics, and printing output. Today’s PCs pack as much processing power as, if not more than, the early mainframes used in business. This has led to inexpensive client-server technology, which has supplanted mainframe technology for most organizational computing needs.

ABSTRACT: Depth

The use of thin-client computing has become a valuable business commodity. As laptops and desktops age or become obsolete, thin-client technology breathes new life into these older systems. As security concerns grow, Linux thin-clients have become the most secure methodology. These two factors are making inroads in business, governmental, and educational institutions. The preservation of liquid assets (cash) is vital if an organization is to remain resilient and stable and to mature. Thin-client in the enterprise is becoming an attractive alternative to PC peer-to-peer networking. The Depth section discusses how, why, and what benefits are to be had by organizations moving in the thin-client direction. Key people from these entities share their views and experiences with this methodology.

ABSTRACT: Application

A thin-client setup consisting of 1 server and 22 laptops will be designed and deployed to service the needs of students in Grades 1 to 5 in a laboratory setting. Students will work on assignments provided by their teachers every Friday for 5 months in the computer lab. All applications used will be Linux-based open source software. I hope to confirm the hypothesis that thin-client methodologies provide a more efficient, cost-effective, and secure networking environment than stand-alone PCs.

BREADTH

SBSF 7000: THEORIES OF MAINFRAME TO GNU/LINUX THIN-CLIENT
The evolution of computing has been rather circular in nature. We have gone from the mainframe computer to fully functional personal computers (PCs) and networked computer systems, and then back to the replacement of mainframes with client-server methodologies. Mainframes, though scarce and very expensive, are still in use today. Large corporations, research organizations, and governmental entities that require heavy number crunching continue to use them.

Mainframe computers became mainstream in the early 1960s. “The first mainframe vendors were Burroughs, Control Data, GE, Honeywell, IBM, NCR, RCA and Univac, otherwise known as ‘IBM and the Seven Dwarfs’ ” (Answers.com, 2007, n.p.). They were designed to be mission-critical computers capable of service in operations requiring extreme reliability; hence, their high cost. The central processing unit (CPU) was housed in large cabinets that required a room with temperature control. The CPU incorporated many processors, each serving its own function, that together comprised the main processing frame of the computer. The term mainframe eventually replaced the CPU nomenclature.

Access to the mainframe was via dumb terminals: basically input/output devices with monochrome displays that were later upgraded to color. For the most part, they contained little, if any, memory and no hard drives, floppy drives, or CD-ROMs. Once the mainframe was powered up and the processors booted (i.e., the operating system loaded and fully operable), a network connection to the terminal was provided, and a screen was displayed at the connected terminals. A prompt on the screen provided for data entry/access to/from the mainframe.

During early mainframe design, output was relegated to a printer. These printers contained a print head that traveled back and forth on a rail, printing the lines of data. The print head consisted of a vertical line of pins tapped from behind by miniature solenoids. It could be had with 9 pins for general-purpose printing or 24 pins for higher quality print. As the print head travels back and forth, the pins are driven forward by the solenoids to strike a ribbon covered with oil-based ink, forming characters on the paper. “Dot matrix printers, known also as impact printers, represent the oldest printing technology, are still in widespread use today, grace of its best cost per page ratio” (Mitech.com, 2003, n.p.; see Figure 1).

Figure 1. A printout from a dot matrix printer.


Note. Classical print head mechanism is shown from the left side. The permanent magnet printer head mechanism is at right (Mitech.com, 2003).

Although most processors do the analytics and processing, other processors monitor the condition of the operations. If a problem is found, these processors route operations to another processor. “As a result, mainframes are incredibly reliable with mean time between failure (MTBF) up to 20 years” (Answers.com, 2007, n.p.). Because more processors, subsystems, and peripherals can be added to a mainframe, it is said to be scalable. This is one reason mainframes were popular during their inception. They could grow with a company; scalability was a main selling point in marketing these large and expensive computers.

Another feature is throughput. With the added subsystems, which are computers themselves, the main processors can simply offload any overhead operations to these subsystems. This leaves the main processors to do unencumbered processing. These subsystems drive input/output (IO) operations through channels. Because a mainframe can sustain hundreds of channels, throughput is immense. It is no wonder that a mainframe can service several hundred to several thousand terminal connections. Figure 2 shows a typical mainframe setup.

Figure 2. Mainframe computer methodology.


Because of the high cost of mainframes, which ran into the hundreds of thousands of dollars, small corporations opted to purchase minicomputers, scaled-down models of the mainframes, because their price was more reasonable. Digital Equipment Corporation was selling PDP-1 minicomputers for a little over $120,000. With the introduction of the PDP-8, new technology had driven prices down to an affordable level. “The first successful minicomputer was Digital Equipment Corporation’s 12-bit PDP-8, which cost from $16,000 upwards when launched in 1964” (Answers.com, 2007, n. p.).

Another selling feature of minicomputers was their size. Mainframes required clean, air-conditioned rooms. Instead of taking up whole rooms, as the mainframes did, a minicomputer took up space the size of a couple of refrigerator cabinets. These minicomputers could still connect to several hundred terminals, which was more than enough for most small companies. Minicomputers continued to prosper well after the advent of the early microcomputers. It was not until the late 1970s and early 1980s, when IBM introduced the 5150 PC, that the tide began to turn for the “Big Iron.” Figure 3 shows a typical minicomputer setup.

Figure 3. Minicomputer methodology.


These mainframes and minicomputers required a large and expensive infrastructure. Given the life span of such hardware, the networking infrastructure must be robust and have the ability to grow with the enterprise. Infrastructure improvements do not make money per se; they provide for other sections of the business to profit from their use. By their very nature, information technology infrastructure investments are large and long term, and they have little or no real value on their own. Infrastructure’s value is in the ability to quickly and economically enable the implementation of new applications, often across business units or the firm, which generate business value (Zmud & Price, 2000).

As computer technology evolved from mainframes to PCs, so did the infrastructure. Direct connections from terminals to mainframes using twinaxial cable (thick cable) were replaced with a thinner, easier-to-handle coaxial cable. As PCs entered the scene, this coaxial cable, labeled RG58U, introduced thin-cable networking (10base2). This gave the PC the ability to connect to other PCs at 10 megabits per second (Mbps). Eventually, coaxial cable using BNC connectors gave way to Ethernet cable (category 5, category 6) using RJ-45 connectors.

Cat 5/6 cables are much thinner, provide faster signal throughput due to lower radio-frequency losses, and have the expanded bandwidth needed for high-speed networking. Ethernet is the transport system used in conjunction with 10base-T, 100base-T, and 1000base-T. The number in front of base is the speed in Mbps. Presently, 10base-T is being phased out because 100base-T is the de facto standard, with 1000base-T, or gigabit, Ethernet gaining popularity. The T stands for twisted wire pairs. The T can be replaced with TX, FX, or CX; the letter X denotes that the designation covers more than one standard. FX designates optical fiber cable used instead of twisted-pair copper; fiber is even faster but is limited by present commercial technology. CX is a designation for high-speed twinaxial cable.

As new technology evolved, PCs entered the scene in the mid-1970s. A computer called the Scelbi (Scientific, Electronic and Biological) was being marketed in kit form by the Computer Consulting Company of Milford. In 1975, the MITS Altair 8800 computer kit became more popular than the Scelbi. Its 8080 processor was capable of running simple programs written by the user. “The computer bus designed for the Altair was to become a de facto standard in form of the S-100 bus, and the first programming language for the machine was Microsoft’s founding product, Altair BASIC” (Answers.com, 2007, n. p.).

Although the Scelbi and Altair microcomputers were sold as kits to experimenters, it was not until 1977 that the PC debuted. Radio Shack introduced the TRS-80, Apple Computer presented the Apple II, and Commodore unveiled the PET. It was not until 1981 that IBM entered the PC fray. From 1977 to the mid-1980s, a whole slew of PCs came and went. Atari had several models. The Timex Sinclair was by far the smallest PC on the scene. The Osborne 1 was a fully contained and portable PC. Kaypro built a better portable without the Osborne’s reliability problems. Figure 4 shows some of the early PCs.

Figure 4. Models of early PCs.


These microcomputers, now termed PCs, were priced low enough for the masses that they became home appliances. Just like the telephone and the television, the PC had arrived. Its popularity soared as more software was introduced for these machines. Like the mainframes and minicomputers, these microcomputers had integrated I/O ports. Data storage devices such as tape drives or 5¼-inch floppy drives, dot matrix printers, monitors or televisions, and joysticks could be attached to these early PCs. It was not until the graphical user interface (GUI) debuted that the mouse became common.

The early operating systems were text based. Control Program for Microcomputers (CP/M) was an early operating system for 64-KB machines that preceded the Microsoft Disk Operating System (MS-DOS). MS-DOS became the running standard on PCs using the Intel 8086/8088 processor.

In 1983, Apple Computer Inc. introduced the Lisa PC. This was the first commercial PC with a GUI. The mouse was used for pointing, selecting, and executing programs; the keyboard could also be used for those purposes. The Lisa was expensive, priced in the $9,600 region. In 1984, Apple introduced the much lower cost Macintosh with Mac OS System 1.0. The GUI had become common. It was not until 1985 that Microsoft introduced its own GUI in Windows 1.0. This GUI ran on top of DOS, meaning that DOS was loaded first, and then Windows. The last stand-alone release of DOS was Version 6.22, in 1994. Linux is also a text-based operating system utilizing KDE, GNOME, or other OSS GUIs available for it. The operating system loads first, followed by the GUI.

The true innovation of these PCs was that they fit on a desk and were self-contained, small, and portable. There was no need for any special cooling requirements. PCs could also fulfill a dual role. With the introduction of application software such as WordStar and VisiCalc, PCs became a practical solution at work. With games coming more into the mainstream, PCs provided youth with new activities.

PCs have come a long way since the 1970s and 1980s. Processors have become extremely powerful. Memory capacity for these machines has increased from the early 1 KB (kilobyte) and 64 KB to over 4 GB (gigabytes) of random access memory (RAM). Working with RAM is much faster than using the hard disk as virtual memory. The more memory a computer has, the faster it can manipulate the programs that are running. With more RAM, it holds more data in memory, and it does not have to save and retrieve data to the disk as often.

Prices for memory have become affordable to most anyone owning a PC. Along with this increase in processor power and computing memory, storage devices have flourished. Hard drive capacities from the early 10 MB to 20 MB (Megabytes) have increased to the latest 1 TB (terabyte) drive. With these increases in computing power comes more and more sophisticated programming. As the computer evolves into a more powerful, faster task cruncher, the software introduced is more intricate with features and functions.

It has gotten to the point where new PCs are as fast as, if not faster than, the older mainframes. Presently, dual-core processors provide two processing cores on one CPU. This prevents the bottlenecking of data flow that occurs when only one processor is present. If one processor is busy, the data to be processed are passed on to the other processor. This process goes on back and forth until the CPU is done processing. Throughput is tremendous. Multitasking and multiuser operation become a true reality for processor-intensive software.

Although multitasking is being done with just one processor, the CPU for the most part just sits idling while the user does some word processing, downloads an Internet file, and listens to a music CD. To better maximize the use of the processor, peripherals, and collaboration, networking was introduced to the PC. “Originally networks were used to connect only mainframe computers. But with the proliferation of inexpensive computer systems, and advances in software, the need to network personal computers and other computer peripherals became apparent” (Sadiku & Obiozor, 2005, n.p.).

The marriage of telecommunications and computing overshadowed the simple networking being done by mainframes to their terminals. Computers could not only connect to each other in a local network, but to other computers throughout the world. “Computer networks, also known as data com or data-transmission networks, represent a logical result of the evolution of two of the most important scientific and technological branches of modern civilization-computing and telecommunications technologies” (“Evolution of Computer Networks,” n.d.).

Peer to peer, also known as P2P, was an early networking schema that has not only survived but also thrived with new enhancements. This methodology connects PCs together via cables: PC1 can use PC2’s printer, PC3 can save a file in PC1’s shared folder, and PC2 can provide an Internet gateway to the other computers on the network. “Peer-to-peer is a communications model in which each party has the same capabilities, and either party can initiate a communication session” (Wolf, 2004, n.p.). Work efficiency is improved, and technology costs are scaled down. Not everybody has to have a printer; a single printer can be shared among several people or whole departments. The same can be done with scanners, storage devices, multimedia equipment, and other peripherals.

P2P has evolved from a local area network (LAN) into a wide area network (WAN). This is where computers can communicate with each other through the Internet rather than just locally within a building or a metropolitan area network (MAN), which can span several buildings in different parts of town.

On the Internet, peer-to-peer (referred to as P2P) is a type of transient Internet network that allows a group of computer users with the same networking program to connect with each other and directly access files from one another’s hard drives. (Wolf, 2004, n.p.)

Several P2P topologies are shown in Figure 5. The Bus topology consists of a thick cable usually mounted up high by the ceiling with a transceiver and Ethernet cable dropping down to each network connection. The Ring topology is basically a circular cable where network connections are installed for networking devices. These two topologies are rather slow. About 10 to 16 megabits per second (Mbps) is a good top end.

Figure 5. Common networking topologies.


The Star topology is rather fast. Using a hub, a device that is now obsolete, the top speed was 100 megabits per second for one connection. As more network connections are added, throughput decreases dramatically. This is because, as in the previous topologies, when one PC asks for data, the other PCs on the network must stop momentarily and listen. If they do not, a collision occurs, and the process must be repeated until the PC gets its data packet. When the data packet arrives at the inquiring PC, it is someone else’s turn.

With the introduction of the switch, which is a smart hub, throughput is tremendously increased. Each connection talks directly to the network on its own channel. Therefore, each connection keeps its data packets to itself. This does not interfere with the other PCs, which can continue to do their own data transmitting and receiving.

Presently, there is a shift from category 5 (cat 5) to category 6 (cat 6) Ethernet cable. The cost is not significantly different between the two. Cat 6 cable provides a better signal-to-noise ratio and has a higher bandwidth. A better signal-to-noise ratio means that the cable picks up less extraneous electrical noise and has a higher data throughput than cat 5.

Cat 6 is twelve times less “noisy” than Cat 5e. When your computer sends data across your network some data packets are lost or corrupted along the way. These packets have to be resent by the system. The better the signal to noise ratio is on your network, the less often this happens… As for the testing bandwidth, the official Cat 5e standard calls for testing across a bandwidth of 100 MHz. The Cat 6 standard calls for testing across a bandwidth of 250 MHz. The reality is that most computers and networking equipment only transmit across a frequency range of 100 MHz. In the future, of course, actual utilization of greater bandwidth may become more common. (Hunt, 2005, n.p.)

Although P2P is the most commonly used, there is a need for centralized data processing and distributed data processing. This is where client-server comes into play. Like the old mainframes, which internalized all data processing (i.e., all operations and data storage were done on the mainframe) and farmed the results out to the terminals, powerful PCs were developed and named servers. These new servers have 1 to 4 processors, 1 to 2 power supplies, a self-repairing hard disk array that will continue to function when a drive goes out, and two or more fast Ethernet cards that are set up for double throughput and/or redundancy.

There are two main client setups in a client-server relation. One is fat-client; the other is thin-client. The terms refer not to physical size but to the amount of processing that is done on the client relative to the server. Connections between server and client are through an Ethernet network.

A fat-client setup requires that PCs be used as the client. The server located elsewhere acts as a repository for all processed data. Most of the processing is done on the PCs. Data converted into information is then sent to the server. Once the server receives the information, it is then available to all other clients. “In a client-server architecture, a client … performs the bulk of the data processing operations. The data itself is stored on the server” (Webopedia, 2007, n.p.).
For example, during the day, a bank will process business transactions. Those transactions using cash are posted immediately because cash is mostly liquid. Checks, payments, debit transfers, and other transactions are posted in a batch file, which then runs in the middle of the night. The results are then posted the following day. This is why when you deposit cash, it automatically shows up. Checks and credit transactions may be held for a day or so pending deposits from other banks or commercial institutions. This is an example of a fat client. Batch processing is done on the PCs. When done, the information is sent and stored on the server for later use.

Certain applications can and do run in real time on fat-clients. At a school, student records are updated by teachers using PCs. These attendance records are posted in the school district’s database in real time. This is done to secure monies for the district from the state by using a student headcount method. Once posted and the attendance file is complete, the database file is then sent to the state’s educational agency that same morning. The state educational agency in turn updates the attendance file by incorporating the data from the various school districts in a batch process. The fat-client server methodology is very popular. Because it uses a PC, it can add programs, do other tasks, and let creativity flow.

The point is that neither you nor I know everything that someone will do with a personal computer, a.k.a. a fat client. And that is the real benefit of fat clients. They unleash people’s minds to do things no one even knew they could do. (Gabel, 2004, n.p.)

To a large extent, most working environments other than P2P are of the fat-client server nature. The server portion is used when certain high-profile applications need to be accessed. This requires a login onto the server. A certain amount of security is provided while logged in.

The fat-client methodology is somewhat redundant. Depending on the server login system used (e.g., Novell’s Client, Windows Active Directory, Linux Login, or other security systems), if the server fails, the users can still log into workstation mode. This mode bypasses the server so PCs can be used as stand-alone computers. In other words, it becomes a simple P2P-networked PC. Work can still be done on the installed operating system. Word processing, spreadsheet analysis, Internet accessibility, e-mail, and any other stand-alone software loaded on the PC is accessible for the user to utilize. This dual role of the PC makes fat-client server a popular methodology.

Fat clients also offer the least security. While connected to the server, access to certain company information is available to the user. This information can be saved to diskette, flash drive, CD-ROM, other removable media, and Internet e-mail. This makes the fat-client server methodology a security risk. At Texas A&M University, such a breach occurred in the summer of 2007. A professor at the university acquired the student records of several thousand students. These records were loaded onto a flash drive. While the professor was on vacation, the flash drive went missing; he did not know whether he had lost it or it had been stolen. The problem was that the drive was missing with private student records. Identity theft can very well occur if the information falls into the wrong hands. The same type of incident occurred 2 weeks later. The university, as usual, threw the bulk of the problem back onto the students.

The university issued an e-mail to all the supposedly affected students to check with their credit bureaus. Telling the students that they were responsible for watching for their own identity theft was an admission of the foolhardiness of those responsible for this occurrence, as well as an attempt to save institutional face. Were the professors disciplined, punished, or fired? Who gave them permission to use private data? Who gave them access to this information? It is still summer, and the answer is: no word yet. There have been policy changes. Faculty and staff now have to fill out more security forms. Staff are being pushed against the wall with more paperwork, and students must check their own identity welfare, all because of faculty misconduct.

This situation happened because of poor security procedures, a lack of protection for sensitive data, the carte-blanche authority given to faculty professors, and the use of fat-client server methodology. Although fat-clients do have their benefits, security can be a headache if not properly implemented from the ground up. Prior to this mishap, the security policy was wordy and vague. “An effective information security program must have clearly defined objectives that are used both to design the specific controls put in place on the network and to inform users of the behaviors expected of them” (Solomon & Chapple, 2005, n.p.).

Maintenance of fat-client server methodology is more stringent than that for thin-client. Not only does the server require maintenance at certain intervals, but all PCs connected to the server also must be maintained. This can be a curse or a blessing, depending on maintenance procedures used within the organization. IT can either send out the technicians to service company PCs or maintenance procedures can be enlisted for users to follow at certain intervals. This makes maintenance rather chancy. Some users may elect not to follow procedures. Instead, they may just make service calls when simple problems occur due to maintenance neglect. This makes much work for the IT department.

On a positive note, if the server ever fails or is brought offline for maintenance reasons, clients can still work as fully functional PCs. Other work can continue. The Internet can still be accessed, e-mail can be sent and received, and stand-alone software loaded on the PC can continue to operate. Server downtime can also be a good time for individual PC maintenance. Though server maintenance is done during scheduled or off-peak production times, there is always the possibility that a server may go down during normal working hours.

For the most part, required organizational software can be installed on the server. This makes it available to all clients connected on the network. Individual software required by departments can be loaded on just those clients. For example, the engineering department of a company may need a drawing program to produce specifications for its products, whereas the accounting department may need an accounting program for fiscal management.

Thin-client is much different. All processing is done on the server. PCs act as dumb terminals. Of course, one can also use dumb terminals or networking appliances to connect to the server. These clients boot (initialize operating system startup) from the network. In other words, the server is started first. Once the server boots, the clients are powered on. The clients are set in the basic input/output system (BIOS) to start from the network. This means that when a client is powered on, there is no need for a hard drive with an operating system, a floppy drive, or a CD-ROM to boot. With power turned on, the BIOS tells the network card to activate and look for a boot server on the network.

The server, seeing the client on the network looking to boot, will then provide it with a desktop similar to the one on the server. The user can then run the programs made available by the server. For the most part, this processing of data happens in real time, not in batches where files are stored and then uploaded to replace older files. If, for example, an insurance company is dealing with client information, this information is needed right then and there. The insurance agent cannot wait until the next day every time new information is needed.
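As an illustration of how the server hands out those network boots, below is a minimal sketch of an ISC DHCP server configuration for an LTSP-style setup. The subnet, addresses, and the path to the pxelinux.0 boot loader are assumptions that vary by distribution and LTSP version:

    # /etc/dhcpd.conf -- illustrative sketch; addresses and paths will vary
    subnet 192.168.0.0 netmask 255.255.255.0 {
        range 192.168.0.100 192.168.0.200;   # addresses leased to thin-clients
        option routers 192.168.0.1;          # gateway, often the server itself
        next-server 192.168.0.1;             # TFTP server holding the boot files
        filename "/ltsp/i386/pxelinux.0";    # PXE boot loader sent to the client
    }

A PXE-enabled client broadcasts for an address, receives a lease from this pool, fetches the boot loader over TFTP, and then loads its kernel and desktop session from the server.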

The database on the server contains record-locking mechanisms that provide a certain level of security. When an agent is manipulating a customer’s data, that customer’s record is locked so that no one else can access it. Once the agent is done with his data entry, the record is updated and unlocked for the next agent to use. This record-locking feature prevents more than one person from accessing a record in use; otherwise, the last person making changes would overwrite the previous person’s data entry.

Thin-client is well known, but it is not too often used. Thin-clients are basically dumb terminals, that is, all the terminals do is I/O. When the client boots, the server provides it with a desktop. The user can run only the applications that are available from the server. Herein lies the crux of thin-clients. The server administrator, or anyone with the root password, is the only one who can load software on the server. No software is loaded on the thin-clients. The server administrator can set periods of operation for the clients. It does not matter if the user arrives at work early to get a head start or wants to work late to finish up: The thin-client will only operate during the times set by the server administrator or whenever the server is powered on.

Depending on the thin-client setup, security is also enhanced. Under the Linux operating system, clients can be denied the use of USB ports, floppy drives, and CD/DVD-ROMs residing on their clients. E-mail can be set so that it is internalized: employees of a company can send e-mails only to each other because of restrictions on Internet usage. If an employee must use his flash drive to save data, he must hand it to the server administrator to insert in the server itself. The same applies to floppies and any peripherals connected to the server. If a user wants to burn a CD, he must have his permission settings configured by the server administrator and let him or her load the CD in the server’s drive.
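In LTSP, restrictions of this kind are typically declared in the lts.conf file on the server. The following is a minimal sketch; option names and defaults differ between LTSP versions, and the MAC address shown is hypothetical:

    # lts.conf -- illustrative sketch of per-client restrictions
    [Default]
        SOUND = False            # no audio on the clients
        LOCALDEV = False         # disable local USB/CD/floppy access
    [00:11:22:33:44:55]
        LOCALDEV = True          # one trusted workstation may mount local media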

Look through any organization, and you will see how many employees just don’t need a full-function PC. That left the typical corporation with thousands of desktops with disks full of Windows utilities and a full load of Microsoft Office. Throw in a couple of in-house applications, and you have an overpriced, under-used virus catcher just waiting to die from planned obsolescence. (Connolly, 2004, n.p.)

Thin-client server setups are also inexpensive compared to the fat-client server methodology. Savings come in the form of reduced hardware and software costs. Running the Linux Terminal Server Project (LTSP) software on openSUSE, one server can service a little over 60 clients by concatenating three 24-port switches. With the server connected to the middle switch, and the clients connected to the rest of the ports on the switches, high throughput is easily achieved (see Figure 6).

Figure 6. Client-server methodology.


Early on, Linux could sustain 3,000 clients. Because of the memory and processor limitations of the PCs of the late 1990s and early 2000s, the limit was reduced to 1,000 users on the main Linux distributions. Linux is a free derivative of the Unix operating system, which is used in mainframe operations and can support thousands of simultaneous connections and users.

This brings us to the present PCs sporting dual-core (two processors on a chip) processors and increased RAM capacity, from the standard 512 MB to over 4 GB. By the end of summer 2007, quad-core processors were available to consumers. With the introduction of 4 processors per chip, computing power will make thin-client computing even more viable, with the capability to easily serve several hundred, if not thousands, of clients. The 1,000-client limit on Linux operating systems can be readjusted to a higher limit in the source code if need be.

That is the uniqueness and greatness of open source software (OSS). It is free to use, change, and distribute. Users have the assistance of programmers, users, and system maintainers throughout the world who help to improve product usability. Users can do with it as they please, provided that they give acknowledgment where acknowledgment is due and pass the same freedoms on to others when they redistribute it. That is the GNU General Public License.

When a problem is encountered with commercial software, users either wait for an update to the software or purchase the next version upon release. OSS is different. Utilizing bug reports, chatrooms, and e-mail, one can contact the software team and alert them to the problem. Within days, a fix is often found and posted on the patch-and-update download list. It makes sense to check for new updates often because they are free. Patches and updates keep operating systems and software up to date. These changes increase the stability of operating systems by fortifying them against external attack. They also lessen the chance of data being compromised or corrupted. “The most basic step in keeping up with emerging threats is to ensure your operating system and software are up-to-date” (Solomon & Chapple, 2005, n.p.).
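On an openSUSE server like the one described here, checking for and applying those free updates is a short exercise; the commands below are a sketch, and other distributions use apt, yum, or similar tools instead:

    # Refresh repository metadata, then apply available patches and updates.
    zypper refresh
    zypper update

Because only the server needs updating, this one session keeps every thin-client on the network current.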

If thin-client server methodology is a good fit for an enterprise, using it within the work sphere will save many headaches. Patches and updates are loaded into one server rather than several hundred or thousand PCs. The same can be said if a large organization requires the use of new software. Installing software one PC at a time is not cost effective; rather, it is inefficient and time consuming. Licensing fees can be saved, depending on the wording of the license. Although the software is in use by many, it is loaded on one server.

The use of Linux thin-clients saves countless hours in maintenance time and repair costs. A systems administrator need not worry about the status of each client computer, terminal, or appliance on the network. With thin-client technology, each client can be serviced at an appropriate time. Clients can be added, replaced, or removed without disturbing the general workflow.

Server equipment can be modified with a minimal amount of time and resources. Only the servers are updated during their maintenance period. Hardware can be updated, software can be installed, and a general cleaning can be done in short order. Hard drive maintenance can be time consuming under Windows®. Because a RAID array consists of two or more drives, defragmenting needs to be done during extended periods of downtime, such as after hours or on weekends.

This is a nonissue with Linux. Because of the ReiserFS and ext2/ext3 file systems, files are automatically laid out in an optimized way during saves. Defragmenting is not required with this operating system. This saves time and keeps the server drives in an optimized state. Wear and tear on the hard drives is also reduced. Journaling file systems such as ReiserFS and ext3 also ensure that, if power is lost, the file system is restored from its ongoing journal. Lastly, the file system is checked periodically, usually after an automatic or administrator-set number of restarts. The program fsck (file system check) can also be run on demand during a maintenance period. It is not recommended that this be done while the server is in operation, because the hard drive must be unmounted; once unmounted, there is no user access to the drive.
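A typical maintenance-window check follows the unmount-check-remount pattern just described. The sketch below assumes a data partition /dev/sdb1 mounted at /srv/data; both names are illustrative:

    # Run only during a maintenance window: users lose access while unmounted.
    umount /dev/sdb1             # unmount the drive so fsck can work safely
    fsck -f /dev/sdb1            # force a full file system check
    mount /dev/sdb1 /srv/data    # remount and restore user access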

Another main issue with Windows® is viruses, trojans, spyware, and so on. Microsoft operating systems have always been magnets for this type of malware. This problem is reduced under the Linux operating system. Viruses, trojans, and so on, target files by their extensions. Malware knows that files ending with .exe, .com, and .dll are executable programs. Once malware breaches a PC and is loaded into memory, any executable program that runs is then contaminated. The user can clear the malware from memory, but the next time the contaminated program is run, it loads into memory and contaminates other programs.

In Linux, the problem is reduced. There is no need for executable program extensions; therefore, extension-targeting malware has little, if any, effect. Users also log on as ordinary users. This means that even if a malware program tries to damage the system, it cannot affect system services, because the user does not have root permissions. This is another safety feature inherent in Linux.
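The effect is easy to demonstrate. In the hypothetical session below, an ordinary user tries to write to a system directory and is simply refused:

    $ whoami
    user1
    $ echo "payload" > /usr/bin/some-system-tool
    bash: /usr/bin/some-system-tool: Permission denied

Only the administrator, working as root, can modify system files, which confines any damage a user-level program can do to that user's own files.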

There is a heavy caveat on the thin-client server methodology. If the server fails to function or becomes inoperable, the whole network is down until the server is repaired or brought back online. There have been recent improvements in this area, though. The use of two terminal servers instead of one provides load balancing and redundancy. While both servers are in operation, they share the load, yet the same information is written to both servers. If one server goes down, the second server takes up the slack. By using this methodology of thin-client server redundancy, maintenance can occur on either server at any time. Network operations will continue to run as long as one server is operable. Barring catastrophic failure of both servers, this is a win-win situation for any business, school, or government entity.

Conclusion

We have come full circle. We started at Point A and went beyond Point B, only to find out that we had gone too far. We return to the beginning, knowing that Point A was not so bad after all. Computing has evolved from mainframes and their terminals, to stand-alone PCs, to client-server methodologies. In client-server methodologies, the server tends to be a high-end PC. The server does just that: it serves its clients. Clients can be PCs, laptops, or other devices. Between the fat-client and thin-client philosophies, the thin-client server methodology provides more control and security, less hardware and software cost, much lower maintenance, and a company-centric approach to computing.

DEPTH

SBSF 7000: CURRENT RESEARCH IN GNU/LINUX THIN-CLIENT METHODOLOGIES


Annotated Bibliography
Acohido, B. (2003). Linux took on Microsoft, and won big in Munich. Retrieved from http://www.usatoday.com/money/industries/technology/2003-07-13-microsoft-linux-munich_x.htm

Can Linux compete internationally against Microsoft Corporation? The city of Munich, Germany, has replaced Microsoft products with SuSE Linux and open source software, and it is progressing with the use of OSS. By replacing Microsoft’s products with OSS, the city has complete control of updates, software, and hardware changes. It is not locked into any one product, operating system, or platform. This provides for true savings and cost reductions as time progresses. This article dispels the myth that only Microsoft Windows products are viable in the government sector. Because open source software is written in many languages, it has become a viable candidate for organizations wishing to cast off the chains of closed or commercial software.

Chang, P., & Kalil, C. (1997). Linux means business to the city of Garden Grove. Retrieved from http://www.linuxjournal.com/article/218

With so many printers and users on the network, what is the fastest and cheapest way to go with network printing? The city of Garden Grove, California, set up printers on a thin-client server and gauged client performance compared to Windows NT and SCO setups. The Linux thin-client methodology proved to be a faster, more stable, and error-free printing solution because the server queues incoming files directly. The Linux system was also quicker to set up and maintain. Depending on the configuration, users have a choice of which of the printers connected to the server to use. With dual-core and quad-core processors on the market, the bottleneck at the printer should not occur, because a good deal of printers on the market use the computer’s processor to render the data into a printer-usable form. The use of a Linux server with thin-clients can make printing operations a self-maintained breeze.

Danen, V. (2003). Get users diskless with Linux thin client – ZDNet UK. Retrieved from http://news.zdnet.co.uk/hardware/0,1000000091,2132841,00.htm

How does one boot a computer without an operating system and a hard drive? One can research boot-up schemes in the Linux Terminal Server Project (LTSP). Computers can be booted by using a preconfigured diskette or from the network by selecting the Preboot Execution Environment (PXE) boot option from the computer’s BIOS boot selection menu. Most PCs now have this option available for configuration. Older PCs may not have this option; in that case, one can run a preconfigured diskette that will allow the PC to activate its network card and search the network for an operating system. The thin-client server will see the PC searching for an operating system and provide it with a desktop to work with. Using thin-client server methodologies and OSS such as Linux and LTSP can provide years of additional life to older, obsolete computers.

Eicht, M. P., & Rosen, J. (2004). Real world case study: Linux thin client savings exceed 37% in just 8 months. Retrieved from http://www.desktoplinux.com/articles/AT7753498575.html

How can a medical cardiology practice standardize operations in six offices connected to eight hospitals serving more than 40,000 patients? It commissioned a feasibility study to analyze overall concepts of hardware, software, database, communications, and security requirements. Using Linux thin-client methodologies, the practice realized savings in the short- and long-term costs associated with Microsoft licensing fees and the programming costs associated with commercial software. Security was enhanced with the use of thin-client appliances replacing standard PCs. Internet communications simplified database collaboration and made patient records readily and securely accessible to medical personnel.

What was once a lucrative market for commercial software companies has now become a cost-saving tool for OSS integrators. There will always be some associated costs in migrating to OSS, such as consultant fees and knowledge-based fees. There is usually a one-time setup fee.

Hargadon, S. (2005). Linux thin client: How it works, benefits, & drawbacks. Retrieved from http://www.stevehargadon.com/2005/10/linux-thin-client-how-it-works.html

What are the benefits and drawbacks of Linux thin-clients? Linux thin-client server methodologies happen to be stable and reliable multiplatform solutions to networked computing problems. They decrease maintenance, control computer usage, and deflect viruses and other malware. Because all work is done on the server, thin-client users can log into any client computer. There are also no licensing fees or update costs because the software is free and open source. On the downside, such a setup does not run programs written for Microsoft Windows operating systems, and some video-intensive programs will not run in a thin-client environment. Although Linux thin-client methodologies have many advantages, they must be weighed against the necessity of using certain commercial software or video-intensive programs.

Harris, M. (2005). LTSP, down by the sea: A 20-terminal Linux cyber tent for education. Retrieved from http://flakey.info/hesfes05/

What is a quick and efficient way to set up a networking scheme for an educational fair?

Harris investigated the most efficient setup of a 20-user wireless network. The LTSP provides add-on programs to Linux to make a complete thin-client server package. This methodology provided 20 low-powered PCs acting as clients to furnish Internet access at the Home Educators’ Seaside Festival in England. Bristol Wireless and Psand.net provided the Internet connectivity to the server and clients. Older and slower computer equipment can be used to build a thin-client server network at a much lower cost than upgrading to newer and more expensive equipment.

Kucharik, A. (2004). Thin Linux clients deliver Internet to library patrons. Retrieved from http://searchenterpriselinux.techtarget.com/originalContent/0,289142,sid39gci968183,00.html

With shrinking budgets, what can a library do to stretch its technology budget? The Otis Library in Norwich, Connecticut, invested in Linux thin-client methodologies to provide more Internet connections, productivity software, and services to its patrons. The use of thin-client technology and OSS allowed the library to more than triple the number of computer workstations available to library patrons. The budget furnished enough money to buy a good server. This server, coupled with the LTSP OSS and older PCs, produced a network that provided Internet access to the patrons and made life easier for library staff. Older computers should not be discarded simply because they will not run the newest operating system from Microsoft. Users should give Linux thin-client server a try.

Lettice, J. (2003). Linux in Munich – Gartner gets retaliation in prematurely? Retrieved from http://www.theregister.co.uk/2003/07/22/linux_in_munich_gartner_gets/

Will Munich, Germany, migrate from Microsoft’s Windows environment to Linux and OSS? The Gartner Group Inc. examined the costs of migrating from Microsoft’s operating systems and office suite to Linux and OSS. The cost to migrate from Microsoft products to Linux and OSS approximated $30 million, versus $27 million to upgrade the prevalent Microsoft products. The City of Munich sees more upfront costs in the migration, yet an overall savings in the long term is easily obtainable. Upgrading Microsoft software may indeed be less bothersome, but it will be more costly as time progresses.

Loftus, J. (2007). Microsoft Windows ousted at California school district. Retrieved from http://searchenterpriselinux.techtarget.com/originalContent/0,289142,sid39gci1245710,00.html

What can replace Windsor Unified School District’s (USD) aging servers and Microsoft Windows environment? Windsor USD hired Heather Carver, who was adept at open source solutions, as its director of technology and information services. Upgrading to newer hardware and purchasing new licenses would have cost over $100,000, which the school district’s IT department could ill afford. By servicing the servers and moving to Linux thin-client methodologies, she was able to save the district a good deal of capital while raising the technological plateau of the schools. The transition was smooth for the most part. As with any technology migration, the few bumps encountered on the road to the open source changeover were quickly remedied. OSS and the GNU/Linux operating system provided Windsor USD with savings in technology costs and an advancement in the technological state of the district.

Miller, R. (2002). Largo loves Linux more than ever. Retrieved from http://www.linux.com/articles/26827?tid=37

How can the city of Largo’s government save IT costs and provide its residents with better service? Largo decided to review Linux thin-client server methodologies. After adopting Linux thin-client methodologies, the city enjoys much lower IT costs. It has also reduced departmental maintenance overhead by using thin-client appliances purchased on eBay. The thin-clients are solid state in nature, with no moving parts. With a life expectancy of over 8 years, these appliances can be purchased in bulk for pennies on the dollar from technology wholesalers. With a little planning, used hardware can be bought cheaply, and Linux software is free. This produces much more computing “bang for the buck” than having to purchase both new hardware and proprietary software.

Pacific Northwest Software. (2005). Case study: United States Postal Service. Retrieved from http://www.pnwsoft.com/index.aspx?page=cs/usps

What hardware and software components, utilized with a specific methodology, will cut costs and increase production in mail-sorting operations? A study was undertaken to organize hardware, software, and methods for accomplishing the task of building a faster, better, and more reliable mail reader. The use of off-the-shelf components for hardware construction and free Unix/Linux-based software provides opportunities to develop a much better system in terms of reliability, lower costs, and operational manageability. An organization no longer has to run proprietary hardware and software to meet its strategic goals. This translates to savings in capital and resources. It also allows changes in design and upgrades without expensive capital investment.

Pladgeman, M. (2007). Thin client computing without “the bill.” Retrieved from http://www.bosanova.net/thinclientbill.html

Can corporate America replace Windows-based thin-clients with Linux-based thin-client server methodologies? A comparison of software costs between commercial and open source thin-client methods shows that the total software cost of a Windows thin-client environment runs about $545 to $795. These costs include $120 for a Terminal Server Client Access License; $30 per client access; $395 for Microsoft Office Professional; and, if needed for remote communications, Citrix at $250. In a Linux thin-client setup, software costs are $0. The operating system is free, the thin-client software is free, and OpenOffice, used as the office suite, is free. Corporate America is exploring the trend of migrating to Linux thin-client methodologies as a way to conserve resources and scale down IT costs.

Rais, M. (2005). Linux in business: The desktop is dying. Retrieved from http://www.reallylinux.com/docs/ltsp.shtml

Will thin-client server computing replace the corporate desktop? One must examine the pros and cons of thin-client methodologies versus standardized desktops in corporate America. Because of the desktop’s inherent weaknesses in security, fair virus protection at best, escalating software costs, and hardware obsolescence, thin-client server methodologies using OSS and LTSP are increasingly seen not just as a much better value where costs are concerned but also as a new, more efficient desktop workstation environment. Replacing stand-alone PCs with thin-client server technology will enhance productivity, conserve monetary resources, and provide a secure workplace. Thin-client server technology can easily replace the standard stand-alone desktop and provide benefits besides.

Scheeres, J. (2001). Mexico City says hola to Linux. Retrieved from http://www.wired.com/politics/law/news/2001/03/42456

How can a developing country increase technological innovation on a limited budget? One must review the statistics on commercial software versus OSS. Mexico City’s technical coordinator, José Barberán, and his team will migrate the city’s computers to the Linux operating system. The project will take about 2 years to complete. The city will save millions in software costs that can be better used in its social welfare programs. If a developing country can save millions of dollars in software costs by simply switching to Linux, surely a developed nation can do the same.

Vaughan-Nichols, S. J. (2007). HP to buy Linux thin client desktop company. Retrieved from http://www.desktoplinux.com/news/NS7988082612.html

Why would a PC manufacturer buy a thin-client manufacturer? Vaughan-Nichols examined Hewlett-Packard’s (HP) decision to buy a non-PC-manufacturing company. The increasing use of Linux and other thin-client methodologies opens other avenues of success in the computing market. By providing thin-client appliances to organizations, HP can play both sides of the fence: it can sell servers for the networks and thin-client appliances to complement the thin-client server methodology. Older corporate equipment in use can also be converted to thin-clients. This gives HP a good deal of leverage in the computing field. PC manufacturers are realizing the strengths of thin-client computing, especially with Linux-based solutions being increasingly implemented by businesses, government entities, and public institutions.

Literature Review Essay

In the Breadth section of this study, I commented on the evolution of computing from the early mainframes, to standalone PCs, to client-server methodologies. There are many benefits to thin-client server methodologies. Depending on the use or purpose of the equipment, network client-server methodologies will provide a wealth of benefits in reduced hardware costs. Software costs are minimized with the use of OSS. Indeed, in most organizations, the benefits of thin-client server computing using OSS have been shown to positively affect the bottom line and increase security.

What are some benefits derived from using a server methodology? We can start with speed. If a Linux server serves 60 clients that are used for word processing, Internet browsing, and so on, getting all of the clients loaded is done quickly. When the first client loads the word processor, much of the program is loaded into memory. When the next client accesses the word-processing program, the code already in memory is quickly dispatched to that client. “The second user of a program on a thin-client Linux network is generally able to start the program faster than the first user because the code is already loaded into memory” (Hargadon, 2005, n.p.).

The operational speed of a server is generally high because of the increased RAM in the server. Standard PCs start with 512 MB to 1 GB of RAM; servers, on the other hand, start with 2 to 4 GB. More RAM equates to more program code that can be stored in memory. Because RAM is much faster than loading program code or data from a hard drive, the clients benefit from the direct network connection to that memory pool. Of course, with the addition of newly introduced switch technology, speeds have jumped from the old 10-megabit-per-second (Mbps) hubs, to 100-Mbps switches, to the latest 1000-Mbps, or 1-Gbps, switches using category 6 Ethernet cable.

Security is very strong on a Linux thin-client server platform. For the most part, users cannot install software on their clients. The client PC, or network appliance, does not have any long-term storage devices such as a hard drive. The use of devices on a thin-client network is controlled by the server administrator. For the most part, hard drives, floppy drives, USB drives, CD-ROMs, and so on, on a client are not functional because all the processing is being done on the server. To save to a floppy or a CD-ROM, the user must physically insert the floppy or CD-ROM into the server itself. “A user is not able to load pirated or problematic software on an individual machine, and only those programs which the school wants students to be using are loaded on the server and thereby are available on the workstations” (Hargadon, 2005, n.p.).

Maintenance costs are greatly reduced. All software resides on the server; all storage devices reside on the server. This makes updates, downloads, and the physical cleaning of one machine quite manageable. Consider, by contrast, loading software on 60 PCs: the time required for the software loads alone is very large, to say nothing of the maintenance of all those machines. Defragmenting and updating 60 PCs would require an incredible amount of time, not to mention running virus scans and fixing the different anomalies of each machine, all of which make for high maintenance costs.

No virus or spyware vulnerability. Linux has been built from the ground up with security in mind, and like the Apple Macintosh (which is based on Linux’s cousin, Unix), it is significantly immune to the viruses and spyware that typically plague personal computers. (Hargadon, 2005, n.p.)

It is much easier to manage just one server than 60 PCs. This is a common-sense scenario. Not only are maintenance costs reduced but so, too, is the total cost of ownership. “Corporate management has come to realize that the PC revolution has its drawbacks. These drawbacks include higher maintenance costs, lower employee performance, and system vulnerability. The hardware becomes obsolete and higher performance CPUs are required” (Pladgeman, 2007, n.p.).

Because one can use PCs, laptops, or thin-client appliances, how does Linux make the claim that this client hardware can be started without a hard drive containing an operating system? Once power is provided to a computer, it seeks an operating system to boot. Most operating systems currently are installed on a hard drive located inside the computer. If this hard drive were damaged so as not to boot, the computer would be rendered useless. Without the operating system, the computer would do nothing; it would just be a large doorstop, or an inefficient small boat anchor.

The computer's BIOS setup contains a boot selection menu offering different boot-up options. One can set the computer to boot from a floppy diskette, CD-ROM, hard drive, USB device, removable device, or the Preboot Execution Environment (PXE). Depending on the make and model of the PC, there may be other menu options. The PXE option sets up the computer to boot from the network. If this option is not available in the boot menu, there are other ways to boot from the network.

Older PCs may not have this option. In that case, one can boot from a preconfigured diskette that activates the PC's network card and searches the network for an operating system. The thin-client server will see the PC searching and provide it with a desktop to work with. “Go to the ROM-o-matic site to select the network card your workstation uses…To generate the boot floppy, insert a blank floppy and use the dd if=eb-5.0.4-rtl8139.lzdsk of=/dev/fd0 command (changing the filename to match the file you downloaded)” (Danen, 2003, n.p.).

This Web site, www.rom-o-matic.net/5.0.4/, generates a network boot floppy by writing a boot image to the diskette. The image corresponds to the type of network card used on the client. Once written, the diskette can be taken to the PC that is to be used as a client and booted. At boot, the network card is activated and searches for a server to boot from. The thin-client server acknowledges the client and provides it a desktop.
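As a concrete sketch of the procedure Danen describes (the image filename here is illustrative; use the one actually generated for your card):

      # Download the Etherboot image matching your NIC from rom-o-matic.net,
      # insert a blank floppy, and write the image to it:
      dd if=eb-5.0.4-rtl8139.lzdsk of=/dev/fd0
      # Boot the client with this floppy in the drive; the NIC then
      # broadcasts on the network for a boot server.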

A company moving to thin-client methodologies can expect lower software costs than with commercial alternatives. Software for a traditional commercial thin-client server runs approximately $545 to $795: a Terminal Server license at $120, a client license at $30, Microsoft Office Professional at $395, and, if needed, a Citrix remote connection license at $250. A standard Linux thin-client server, on the other hand, costs $0 in software: the Linux Terminal Server Project (LTSP) software is free, the Linux operating system is free, and OpenOffice is free. Linux thin-client server methodologies are a viable alternative to commercial thin-client methodologies and to simply networked computers.

When you figure in the licensing fees; the time spent by system administrators updating each workstation, not to mention installing and maintaining anti-virus software; the downtime this causes other employees while their workstations are out of commission; and the additional time required by employees to learn each new revision; the resulting price tag is quite sizable! (Pladgeman, 2007, n.p.)

With all the positives and few negatives, it is no wonder that Linux thin-client use is growing. Granted, if a Windows application must be run for which there is no Linux alternative, one can always keep some PCs running Windows. But for computer labs, student records, online class courses, office suites, or a secure, low-maintenance network, Linux will more than handle the task. In the enterprise, Linux file servers, Web servers, domain servers, and authentication servers also are viable substitutes for the various offerings from Microsoft.

The return on investment is so good that businesses, government agencies, and educational institutions have migrated to the Linux sphere of influence. Following is an exploration of the reasons organizations are moving to Linux operating systems and Linux thin-client methodologies.

Capitol Cardiology Associates (CCA), a cardiology practice based in Albany, New York, migrated to Linux thin-client in 2003. The practice comprises more than 40 doctors, surgeons, and providers operating in seven offices and seven hospitals in New York and Massachusetts.

CCA consists of over 40 physicians, surgeons & providers, practicing in 7 offices and 7 hospitals in New York and Mass., employing approx. 200 employees. In 2003 activity stats were: 128,000 patient visits (office & hospitals), 92,000 diagnostic tests, 6,000 catheterizations & interventions, 800 open heart surgeries, over 380,000 billed services with yearly revenue over $22 million. (Eicht & Rosen, 2004, n.p.)

A major business goal for CCA at the time was to standardize its mixed environment of Windows 95, 98, and 2000 machines.

Connectivity between offices and hospitals would prove pivotal in choosing Linux thin-client methodologies.

These Linux thin-client methodologies provided fast interconnectivity over point-to-point or broadband Internet links where available. Files can be shared across networks and clients with minimal add-on costs. IT support is readily available through integrated remote desktop access. The Linux network's high resistance to viruses, trojans, and malware makes it a stable and reliable environment to work in. A main feature of this type of environment is the restriction of hardware and software: client hardware used for Internet game playing, playing music CDs, or copying confidential company information to a flash drive, CD-ROM burner, or diskette can all be controlled.

Based on previous experiences with Windows 95/98 desktops, user abuse of non-business functions (e.g. solitaire, music files, Internet, etc.) constituted a significant loss of employee productivity. Estimating that if the average employee spends 15 minutes per day “playing with the computer” he/she would waste greater than 50 hours per year. At $15.00 per hour that translates into $750.00 per employee each year. For 200 employees the productivity loss could amount to $150,000.00 per year. (Eicht & Rosen, 2004, n.p.)

Employees required little or no training in the OSS applications. OpenOffice, a free office suite that runs on different operating systems and platforms, took only a few hours to become comfortably familiar with. The same was true of the Linux operating system: although the desktop icons differ in color and appearance, they perform essentially the same functions as their Windows counterparts. There were some caveats. Certain software, such as accounting and human resources programs, was kept on Windows-based computers, to be replaced as suitable substitutes are found.

Proprietary software (accounting, human resources) initially did not adapt well due to lack of vendor support. Thus, subsequently we bypassed the issue by keeping these functions segregated on a dedicated Windows boxes. Efforts are currently under way to remedy this problem with Windows emulation software such as Win4Lin by NetTraverse. (Eicht & Rosen, 2004, n.p.)

CCA is more than happy with the migration. All expectations have been met, and some have been exceeded. “Network stability has been phenomenal…Desktop maintenance has been outstanding… We had estimated the yearly operating costs to be 37% less with Linux thin client. These saving appear to materialize nicely and will likely exceed them in the future” (Eicht & Rosen, 2004, n.p.).
Many public libraries operate under financial constraints. For the most part, public libraries are departments within a city's infrastructure; as such, they must compete with other departments for funds. IT expenditures suffer accordingly: new computers cannot be bought every year, or even every several years, which leaves libraries with aging equipment to satisfy the needs of their patrons. With the fluctuating costs of infrastructure, Internet access, software, and hardware, Linux thin-client is getting more attention.

In 2004, the Otis Library in Norwich, Connecticut, invested in Linux thin-client methodologies to provide more Internet connections, productivity software, and services to its patrons. Thin-client technology and OSS allowed the library to more than triple the number of workstations available. The budget furnished enough money to buy a good server; coupled with LTSP OSS and older PCs, it produced a network that gave patrons Internet access and made life easier for library staff.

The benefit of this arrangement is twofold: Not only did it allow the library to deliver much-needed Internet access to its patrons inexpensively; it also made librarians’ lives easier…librarians “don’t like to be PC cops” and tend to be introverted and non-confrontational. The automated system allows librarians to keep their noses in their books. (Kucharik, 2004, n.p.)

The library found that for the one-time cost of a good PC to act as a server, it could refurbish its used equipment into a thin-client network. Over time, the only hardware changes needed would be those that improve server performance. Granted, PCs do die out, but they can simply be replaced by used laptops or inexpensive network appliances costing about $200 each. Keyboards and mice can be bought for about $19 a set, and a standard CRT monitor can be had for as little as $50. Better yet, used and refurbished equipment can be bought from wholesalers for pennies on the dollar. After all, they are just clients; it is the server that does most of the work.

The U.S. Postal Service has begun integrating OSS into its main sorting operations. “The United States Postal Service has over 900 computer systems, each consisting of eight Linux machines, that are used to convert mail piece images into the destination address text” (Pacific Northwest Software, 2005, n.p.). Each system consists of a master computer and seven slave computers, used for optical character recognition.

The master computer oversees the operation of the slaves and does no character recognition itself. “Its job is to provide all the other computers with everything they need to operate and provide a user interface for the whole system” (Pacific Northwest Software, 2005, n.p.). This system is integrated into a postal network connected to a group of mail sorters that attempt to decipher each mail piece's addressing. Each sorter has a camera that photographs each mail piece as it passes. The master computer directs the image to the Linux-run slaves, which contain the character recognition software and algorithms. Only one slave deciphers a given image. Once the address is deciphered, the mail piece drops into the appropriate address, ZIP, city, state, or country bin.

This setup should provide a cost-effective mail-sorting system for years to come. The Linux operating system is free and easily updated. As time progresses, older PCs can simply be replaced by newer, more powerful ones, and as the number of processing cores per chip increases, fewer slaves will be needed.

The Windsor Unified School District in northern California hired Heather Carver as its IT director while the district was having difficulty managing its IT infrastructure. Carver analyzed ways to keep using Windows and the district's existing software. Carver stated, “We looked at keeping the physical environment, and how we could accomplish that. But in that scenario, if we could afford the software upgrades, then we could not afford the new hardware required to run it and vice versa” (as cited in Loftus, 2007, n.p.).

New hardware and licensing would have cost approximately $100,000, more than the district could bear for its seven schools. Using Linux thin-client methodologies, the district set up a computer lab for half the cost of a commercial solution. “[The students] are able to do more because Linux cost less,” Carver said. “Our new computer lab [at Brooks] was set to cost $35,000 and ended up costing us $16,000 with Linux [on thin clients]” (as cited in Loftus, 2007, n.p.).

Each of the seven schools has a server room with 10 interconnected servers providing thin-client, printer, application, and collaboration support for students, staff, and faculty. Not only was the migration to Linux less expensive; it also proved easier to manage. “Implementing ZENworks as a help desk for teachers and staff has resulted in a 90% reduction in the amount of time it takes IT staff to resolve problems” (Loftus, 2007, n.p.). The migration from commercial software to OSS went quite smoothly. The district is happy with the results, and the kids think Linux is quite “cool.”

The City of Largo, Florida, true to its motto, “City of Progress,” has shifted toward Linux. Its system administrators have found that Linux thin-client methodologies can have a real money-saving impact. By buying thin-client appliances on the Web, especially on eBay, the city has saved significant sums in building out its IT, and refurbished hardware can be had at good prices as well. The only requirements for a thin-client appliance, or PC, are a monitor, keyboard, mouse, and a connection to the server.

Finding a whole bunch of the NCD thin clients they prefer — which sell new for around $750 — on eBay for prices ranging from 50 cents to $5. No, they aren’t the latest model, but who cares? These things have no moving parts; the super-cheap used ones are more than adequate to run a KDE desktop and all the apps a typical city employee needs; and with a 10 year expected life it doesn’t matter if they’re a few years old. (Miller, 2002, n.p.)

The use of Linux thin-client methodologies has saved the city of Largo money and reduced its maintenance costs and troubleshooting time. Cities worldwide are looking to the future for a less complicated, more manageable IT networking solution.

In Munich, Germany, Gartner Group Inc. examined the costs of migrating from Microsoft's operating systems and office suite to Linux and OSS. The migration would cost approximately €30 million, versus €27 million to upgrade the incumbent Microsoft products. Munich faced more up-front costs in the migration, yet an overall long-term savings was readily obtainable.

Gartner does not say outright that it thinks the Munich switch will turn out to be a costly failure, but it seems to question the move in terms both of cost and methodology. The migration, it says, will cost around €30 million, whereas an upgrade to Windows would have cost €27 million, excluding the extra discounts from Microsoft which Munich spurned. Alongside this, Gartner claims that “many applications will not migrate to Linux” but will be run either as thin client systems or “using virtual machine software, such as VMware.” (Lettice, 2003, n.p.)

Microsoft Corporation saw this as a threat to its software dominance in the German market. SuSE Linux, a German Linux distributor, had offered the Munich City Council an alternative to Windows-based computing. By migrating to Linux with the company's help, Munich would reap real savings in the long run. It also would be able to refurbish and continue using its older equipment, and upgrades are free: once the Linux operating system is installed, upgrading to a newer version is relatively painless. Yet despite these Linux answers to the migration problems, Gartner believed that retaining Microsoft Windows would be more cost-effective:

Gartner claims that Munich currently uses “many older versions of Windows,” observing that this class of migration is easier to cost-justify, but wasn’t that a point we just saw galloping by, apparently unnoticed by the Gartner analysts? If you are currently using Windows 9x or even 3.1, then you must surely be thinking about upgrading. If you were to upgrade to WinXP, then it will be more costly and difficult for you should you later decide to migrate somewhere else. So a cheaper Windows deal now is only cheaper if you accept lock-in, and rule out migration for a period of years. (Lettice, 2003, n.p.)

Munich decided against Microsoft's lower bid. Locking in the entire city for several years would obligate it to update software and hardware, and upgrading to a newer version of Windows would be costly when the time came. The city of Munich, Germany, is progressing with the use of OSS.
By replacing Microsoft’s products with open source software, the city has complete control on updates, software, and hardware changes. They are not locked into any one product, operating system, or platform. This provides for true savings and cost reductions as time progresses. (Acohido, 2003, n.p.)

The use of Linux thin-client server methodologies would enhance departmental operations. A thin-client server can support up to 1,000 clients as a standard; depending on the server, Linux can be reconfigured to handle up to 3,000. Because Linux was patterned after Unix, the larger number was the early standard; because of processor constraints in the PCs of the distribution era, the standard was lowered to a more realistic 1,000 clients.

An operational concern is the maintenance of departmental computers. With thin-client methodologies, only the server requires periodic updates and maintenance. The clients, for the most part, need only their mouse, screen, and keyboard cleaned. If client appliances are used, even the yearly dusting out of a client PC can be skipped.

A client appliance is actually a low-powered computer requiring no fans. It has a small desktop footprint, usually the size of a small book. For the most part, only keyboard, mouse, screen, and network connections are provided on the appliance; the more costly ones add USB, printer, and sound connections. The cost of a simple client appliance has dropped to less than $200. These clients are essentially maintenance free. Being of solid-state construction, they can be left on without any degradation in performance, and the design ensures years of trouble-free operation at minimal cost. There are no moving parts to wear out. Like everything else, hardware does fail; after several years of operation, a failed device can easily be replaced, and because prices are low, several spares can be kept on hand.

The operational savings in maintenance costs per department would be tremendous. No longer would every PC have to be updated; only the server is updated, at a prescribed time, with updates run automatically during nonworking hours. This is one of the many reasons Munich chose Linux over Windows.

When setting up networking with Microsoft software, options are limited, whereas Linux provides varied and configurable ways to meet objectives without the constraints of commercial software. Therein lies the crux of the problem with the commercial route: when Microsoft announces a new operating system, one not only has to upgrade the operating system but often must also update existing software to newer versions. This keeps one locked into the commercial software loop, with costs in software as well as hardware.

When Windows 95 appeared, it could easily run in 40 to 50 MB of disk space and 32 MB of RAM. Windows 98 increased that to 60 to 70 MB of disk space and ran well with 64 MB of RAM. Windows XP required 128 MB of RAM and 1.2 GB of hard disk space; after more than 93 patches, XP became bloated and required at least 256 MB of RAM and 2.5 GB of disk space, with 512 MB of RAM preferable. Now there is Vista, which requires a minimum of 1 GB of RAM, about 3 GB of disk space, and the newest processors to run decently.

Hardware costs, in other words, have risen because of operating system upgrades, and the same is true of software: as newer operating systems are unveiled by Microsoft, software companies scramble to keep up and to sell newer versions of their products.

Munich was skeptical of Microsoft's offer, even after Microsoft CEO Steve Ballmer flew to Germany to persuade Ernst Wolowicz, chief of staff to Mayor Ude, to reconsider the city's stance on adopting Linux. The city balked at any consideration given to Microsoft. Instead, it hired Unilog Integrate, a technology strategy company, to investigate the differences between the two operating systems and to advise the city council on which had the better options.
Unilog judged Microsoft’s proposal — to swap out all existing versions of Microsoft Windows and Office for the newest versions — as cheaper and technically superior. But the offer from IBM-SuSE better met “strategic” criteria set forth by the Munich council, says Harry Maack, Unilog project manager. (Acohido, 2003, n.p.)

For the council, be it standard networking, fat-client server, thin-client server, or any combination thereof, Linux provided the best methods, a more secure connection, and a more stable platform for its computing needs.

With these reasons in mind, Munich City Council members, with IBM in tow, refused Microsoft's last offer, which would have shaved another $8,200,000 off the bottom line. Microsoft also offered millions of dollars' worth of free training and technical support. Though Microsoft's bid was $3,800,000 lower than the SuSE/IBM offer, the council voted to migrate to Linux. Councilwoman Strobl, for one, was skeptical of the late offer: “Our consultant had no time to double-check the offer, whether it was really cost effective, or whether there were hidden costs,” she says. “We did not take it seriously” (Acohido, 2003, n.p.).

In England, the Home Educators' Seaside Festival (Hes Fes) was in need of Internet access. This festival, which began in 1998 and is held in different parts of England, caters to home educators throughout the world. Educators from Spain, Germany, Italy, Holland, France, England, the United States, and many other countries gather to discuss new teaching techniques and software offerings and to attend educational seminars.

The Hes Fes (Home Educators’ Seaside Festival) is a yearly gathering for people involved in or interested in Home Education; that is education of children privately in the home by the parents, rather than in state or public (private) schools. (Harris, 2007, n.p.)

This 6-day festival requires that Internet access be available to attendees. Using LTSP software, a thin-client server, and client machines, a tent was set up to house the network. Older laptops served as clients, with a newer desktop PC acting as the Linux thin-client server. These laptops were 10-year-old, early-generation Pentium machines incapable of running a modern operating system: the processors are too slow, the RAM cannot be upgraded, and hard disk storage is in the unusable 4-GB range. Ten years ago these laptops were top performers; given the massive storage and memory today's operating systems require, such machines are better used as dumb terminals, or thin-clients. With the Linux server operational, the laptops receive a desktop from the server and operate as clients, like PCs. Bristol Wireless and Psand.net provided Internet connectivity to the server and clients.

Bristol Wireless is a community network project that began in the city of Bristol in May 2002. Its goal is to link the city via an open, community-oriented wireless network for the mutual sharing of information and opinions regarding local issues and interests. Psand.net specializes in satellite and wireless communication networks for events and outreach projects. It was established in 1996 to promote the use of GNU/Linux and free software. (Harris, 2005, n.p.)

Older, slower computer equipment can thus be used to build a thin-client server network at far lower cost than upgrading to newer, more expensive equipment. Thin-client methodologies using Linux came through for Hes Fes and are set to become a yearly fixture at England's festival for educators.

All the workshops were a resounding success, particularly with the adults, and the feedback was very positive. Everybody who was involved learnt a lot of useful information. As part of the workshops, and at other times, we also distributed free copies of Simply MEPIS Linux to people interested in trying the system. (Harris, 2005, n.p.)

The Mexico City municipal government announced that it was changing over to Linux. Mexico City's technical coordinator, José Barberán, and his team will migrate the city's computers to the Linux operating system, a project expected to take about 2 years. The city will save millions in software costs that can be put to better use in its social welfare programs. “A program to install the free Linux operating system in public schools, for example, has reportedly saved the government $3 million in Microsoft licenses” (Scheeres, 2001, n.p.).

Mexico is no stranger to OSS. Miguel de Icaza co-founded the GNOME project; GNOME is a GUI widely used in the Linux arena. This free software is just as easy to learn as Microsoft Windows, and it provides avenues for substituting commercial programs. There is a great deal of commercial, closed software for Windows, and open source groups are working hard to offer equivalents to the expensive commercial packages. “The standard MS Office price tag is $250. It would take the average Mexican — earning $5 a day — almost two months to buy it” (Scheeres, 2001, n.p.).

In an effort not to interrupt the city government's daily business, the transition in Mexico City will be done slowly. Older machines will be refurbished for use, and most likely departmental thin-client server methodologies will be employed. This will conserve resources and put all users, as clients, on an equal footing. Maintenance will be reduced, although conceptual training will need to increase: because the operating system is new to users, explaining the underlying concepts is crucial to understanding and wide acceptance. “It's true that most people don't know how to use it, but the system is relatively uncomplicated to learn” (Scheeres, 2001, n.p.).

With more local, state, and national entities switching to OSS and thin-client methodologies, is there enough momentum behind these concepts to replace the desktop? Given commercial software's inherent weaknesses in security, its fair-at-best virus protection, escalating software costs, and hardware obsolescence, thin-client server methodologies using OSS and LTSP are increasingly seen not just as a much better value but as a new, more efficient desktop workstation environment. Replacing stand-alone PCs with thin-client server technology will enhance productivity, conserve monetary resources, and provide a secure workplace.

In business world wide, a growing number of IT managers recognize this unique power to simply disconnect their desktop hard disks and remove the core of what dominates their lives. Instead, they switch OFF the ‘desktop’ and switch ON the ‘thin client to terminal server’ readily available for FREE with Linux. (Rais, 2005, n.p.)

The increasing use of Linux and other thin-client methodologies opens other avenues of success in the computing market. By providing thin-client appliances to organizations, Hewlett-Packard, for example, can play both sides of the fence: it can sell servers for the networks and thin-client appliances to complement the thin-client server methodology, while an organization's older corporate equipment can be converted to thin-client use. This gives HP a good deal of leverage in the computing field, and HP clearly sees the coming jump in thin-client computing.

HP made a particular point of stating that acquiring Neoware is intended to accelerate the growth of HP’s thin-client business by boosting its Linux software, client virtualization and customization capabilities, expanding its regional sales footprint and broadening its hardware portfolio. (Vaughan-Nichols, 2007, n.p.)

Conclusion

Free OSS is gaining momentum, so it is no wonder that business, government, and education are readily seeking it for solutions to their IT problems. The main problem is money: IT can be a boundless money pit if done wrong; done right, it can actually be a cost saver, even a money maker. No longer are organizations tied to expensive commercial software. Linux thin-client strategies can provide adequate, if not superior, alternatives to present stand-alone and server-oriented constructs.

APPLICATION

SBSF 7000: THE USE OF A GNU/LINUX THIN-CLIENT METHODOLOGY

The final part of this study is to produce a free OSS thin-client network using SuSE Linux and associated software (see Appendix A). Novell (2006) provided the documentation that a first-time user may need for the first part of building a thin-client server.

There have been several changes in OpenSuSE. This study used OpenSuSE 10.2, but a new release, 10.3, is now on the market, and the documentation for 10.3 may differ from the 10.2 installation. Either way, this study provides the concepts required to grasp a thin-client installation.

In this research, I used a standard PC. The thin-client server contained a Pentium 4 processor running at 2.8 GHz; 1 GB of RAM; an 80-GB hard drive; two 1-gigabit network cards (one for the local network, one for the Internet); one CD-ROM/RW drive; one floppy drive; and standard I/O, including USB, serial, parallel, keyboard, mouse, and video ports. The computer was reset to all default settings according to the setup manual. “Before you use system setup, it is recommended that you write down the system setup screen information for future reference” (Dell, 2007, n.p.). I considered these minimal specifications for a thin-client server; more RAM, larger hard disks, and faster or dual processors greatly increase the speed of operation. Depending on the number of clients, the server can have lower or higher specifications. Switches can be concatenated to connect hundreds of clients; the top limit for SuSE Linux is presently 1,000 clients, and this total can be increased if necessary.

Any entity serving that many clients truly needs a bona fide high-end server from a reputable company. Redundant power supplies, a RAID 5 array, multiple processors, an industrial mainboard, and an uninterruptible power supply work together to keep such a server up continuously. A simple “home brew” machine will not stand up to this type of severe service.

The clients, on the other hand, can be just about any PC with 128 MB of RAM and a network card; a client will most likely run with 64 MB, provided the memory is not shared with other devices such as video. In this research, I used old laptops whose batteries were no longer a viable power source; they were all connected to their power supplies and standard AC wall plugs. The good thing about these laptops was that their BIOS provides a PXE boot setting, which lets the laptop boot from the network through its network card. If the BIOS does not provide a network boot, a special floppy diskette with the boot software and network interface card (NIC) driver will suffice. The boot disk for a particular NIC can be downloaded from ROM-o-matic.net.

A 24-port gigabit switch was added, with Category 5 cables for the ports used. This switch is where everything comes together: the thin-client server, the clients, and the rest of the world meet there. The NICs on the thin-client server and the switch run at 1,000 megabits per second (Mbps). Because each laptop NIC runs at 100 Mbps, the gigabit switch provides excellent data transmission and prevents a bottleneck at the switch.

There were 23 laptops and one thin-client server connected to the switch. The server NIC connected to the switch created an intranet, its own LAN serving the laptops, while the second NIC on the server connected to the standard network and the Internet. The thin-client server can also act as the firewall for its own LAN, which in itself is an important and powerful tool: while the server is in operation, one can use network analyzer software to examine the packets of data entering the intranet, which is good for security. Content-filtering programs such as ProCon Latte can be freely added to the browser to filter unwanted content before it reaches the clients, which is especially useful in an educational setting where young students have access to the Web.
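As an illustration, here is a minimal sketch of watching the client LAN with the common tcpdump analyzer, assuming eth1 is the LAN-facing card as configured in Appendix A (run as root on the server):

      # Show all packets crossing the client-side interface
      tcpdump -n -i eth1
      # Narrow the view to Web traffic from the clients
      tcpdump -n -i eth1 port 80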

Once the network was set up and running, students from Grades 1 to 5 attended computer classes for 45 minutes every Friday. Their teachers were asked to provide assignments based on standard schoolwork every other week; I was there to provide and instill the concepts needed to complete the lessons. In the intervening weeks, I gave a short lecture and a computer lesson. The lessons for Grades 3 to 5 consisted of word processing and presentation graphics. Occasionally, students would go to the Web to collect data and then write a paper on an assigned topic; printing was done through the server. Students also were tasked with giving a presentation on a particular subject. Using free and open source software, such as OpenOffice and the Mozilla Firefox browser, all lessons were completed in times comparable to commercial software.

The students in Grades 1 and 2 presented other challenges. These students were not yet conceptually ready for that level of schoolwork, so the teachers, a counselor, and I agreed to start them with keyboarding and a variety of generalized study.

After reviewing various free keyboarding programs on the Web, I settled on Sense-lang.org's Touch Typing. This online software requires only an Internet connection and a browser. It lets the user either paste text into the text box from other sources or work through the 15 lessons provided in the course. Touch Typing is versatile, offering different keyboards for different languages, and its instructions are clear and concise, with simple tips for beginning typists. The students adapted easily to this free software.

Finding a generalized study site was more difficult. Out of many, I found one good general site that I used for all classes: “We'll help you learn, enhance your education, and show you how to have fun doing it!” (Kidport, 2006). This Web site offers generalized studies in creative arts, math, social studies, science, and other areas for students from kindergarten to Grade 8. It is strong on math while taking the student's grade level into account, and its science section presents engaging facets of the animal kingdom, the human body, energy, and so on. The other sections were of fair educational value.

There were no discernible impediments while running the thin-client setup. Start-up time for clients was about 1.5 to 2 minutes once the server was booted and operational. Among the advantages of this methodology are speed and security. I noticed that certain applications ran faster in the thin-client setup than on ordinary PCs; word processing worked at blazing speed. The explanation is that once the software is loaded into memory, Linux simply distributes the required portions to the laptops; there is no need to go back to the hard drive and reload parts of the program. When a student requests the spell-check function, it displays on screen immediately because it is already loaded in memory.

Where security is concerned, if students needed to save files to a diskette or a flash drive, they had to take it to the teacher, who would insert it in the thin-client server. USB ports, diskette drives, and so on do not work on the client laptops, and files cannot be stored on the clients because everything is run by the thin-client server. The clients are used as basic I/O dumb terminals.

If devices such as sound, USB, and storage are needed on the client, Filesystem in Userspace (FUSE) can provide the file system required to operate them. In Linux, devices are mounted to a file system; by providing a file system to the clients, devices such as sound cards, video cards, printers, and USB drives can be made to work on the client. In this application, FUSE was not loaded: it is worthy of a separate research question and would greatly extend the scope of this thin-client server research.
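For reference, a minimal sketch of what enabling local devices would involve, mirroring the LTSP 5 commands listed in Appendix A (clientIP is a placeholder for the client's address):

      # On the server, as root: load the FUSE kernel module
      modprobe fuse
      # On the client, as the logged-in user: mount the client's local
      # /media devices (flash drive, CD-ROM) into the user's home directory
      mkdir -p /home/$USER/mountpoint
      ltspfs clientIP:/media /home/$USER/mountpoint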

The construction of this thin-client setup required some space, and enough room was made available at the far end of our library. Figure 7 shows the beginning setup for our minilab; the laptop carts that provided the laptops and printers for our research are in the center of the picture.

Figure 7. The carts that are to provide thin-client laptops.


Figure 8 shows the laptops positioned on the tables that I used to administer the classes once the connections were done.

Figure 8. Thin-client laptops set on tables.


The interconnection of the laptops is shown in Figure 9.

Figure 9. Thin-client laptops being interconnected.


A photograph showing the elementary students in class using the thin-client methodology was removed because of a change in district policy on student permissions. Once the setup was operational, students from Grades 1 to 5 attended my Introduction to Computer classes from August to December.

In December, the Introduction to Computer course ended, and a thin-client setup was constructed in a common area outside the library for college students who needed computing time to finish their courses. Figure 10 displays the newly constructed setup, Figure 11 shows the login screen, and Figure 12 shows my assistant, Rajesh Kanisetti, and me running programs on the clients.

Figure 10. Thin-client server demonstration for college students and staff.


Figure 11. The LTSP and open SuSE login screen.


Figure 12. My assistant, Rajesh Kanisetti, and I are running various programs on the clients.


Conclusion

The parameters are these: older equipment that is no longer viable for a new commercial operating system, or that has become too slow under the current commercial patches and updates, can be given new life as a client in this thin-client methodology, provided it can boot from a network card. Another feature of this methodology is that when a PC client finally becomes inoperable, it can be replaced with an inexpensive thin-client device costing less than $200. These devices are mostly solid state and will provide years of economical, silent operation. OSS and free operating system upgrades provide savings in software costs, giving school districts, businesses, and government entities the ability to redistribute saved resources to where they can best be used.

After 6 months of using and testing this OpenSuSE Linux thin-client methodology in an educational setting, I can reasonably conclude that the system works as intended. Obsolete equipment was given new life, and replacement costs for 23 laptops were avoided. We have come full circle from mainframes and terminals, to independent PCs, to thin-client computing. OpenSuSE 10.3 is presently available, with Version 11 on the horizon. Either way, the OpenSuSE thin-client methodology is a winning proposition for any organization wishing to lower its operating constraints and increase its return on software and hardware investments.

REFERENCES

Acohido, B. (2003). Linux took on Microsoft, and won big in Munich. Retrieved from http://www.usatoday.com/money/industries/technology/2003-07-13-microsoft-linux-munich_x.htm

3Com. (1996). SuperStack II Hub 10 24 Port TP user guide. Santa Clara, CA: 3Com Corporation.

Answers.com. (2007). Mainframe: Definition. Retrieved from http://www.answers.com/mainframe

Chang, P., & Kalil, C. (1997). Linux means business to the city of Garden Grove. Retrieved from http://www.linuxjournal.com/article/218

Connolly, J. (2004). Think thin (client). Retrieved from http://searchwinit.techtarget.com/originalContent/0,289142,sid1_gci943333,00.html

Danen, V. (2003). Get users diskless with Linux thin client – ZDNet UK. Retrieved from http://news.zdnet.co.uk/hardware/0,1000000091,2132841,00.htm

Dell Corporation. (2007). Dell OptiPlex GX280 systems user's guide. Retrieved from http://support.dell.com/support/edocs/systems/opgx280/en/ug/advfeat0.htm#1110952

Eicht, M. P., & Rosen, J. (2004). Real world case study: Linux thin client savings exceed 37% in just 8 months. Retrieved from http://www.desktoplinux.com/articles/AT7753498575.html

Evolution of computer networks. (n.d.). Retrieved from http://media.wiley.com/product_data/excerpt/28/04708698/0470869828.pdf

Gabel, D. (2004). Fat is phat. Retrieved from http://searchwinit.techtarget.com/originalContent/0,289142,sid1_gci943333,00.html

Hargadon, S. (2005). Linux thin client: How it works, benefits, & drawbacks. Retrieved from http://www.stevehargadon.com/2005/10/linux-thin-client-how-it-works.html

Harris, M. (2005). LTSP, down by the sea: A 20-terminal Linux cybertent for education. Retrieved from http://flakey.info/hesfes05/

Hunt, B. (2005). What is the difference between Cat 5e and Cat 6. Retrieved from http://fourpair.blogspot.com/2005/02/what-is-difference-between-cat-5e-and.html

Kidport. (2006). Welcome to Kidport. Retrieved from http://www.kidport.com

Kucharik, A. (2004). Thin Linux clients deliver Internet to library patrons. Retrieved from http://searchenterpriselinux.techtarget.com/originalContent/0,289142,sid39_gci968183,00.html

Lettice, J. (2003). Linux in Munich – Gartner gets retaliation in prematurely? Retrieved from http://www.theregister.co.uk/2003/07/22/linux_in_munich_gartner_gets/

Loftus, J. (2007). Microsoft Windows ousted at California school district. Retrieved from http://searchenterpriselinux.techtarget.com/originalContent/0,289142,sid39_gci1245710,00.html

Miller, R. (2002). Largo loves Linux more than ever. Retrieved from http://www.linux.com/articles/26827?tid=37

Mitech.com. (2003). Serial dot matrix printer technologies used today. Retrieved from http://mimech.com/printers/

Novell Incorporated. (2006). OpenSuSE 10.2 reference guide. Retrieved from http://www.novell.com/documentation/opensuse102/Index.html

Pacific Northwest Software. (2005). Case study: United States Postal Service. Retrieved from http://www.pnwsoft.com/index.asp?page=cs/usps

Pladgeman, M. (2007). Thin client computing without “the bill.” Retrieved from http://www.bosanova.net/thinclientbill.html

Rais, M. (2005). Linux in business: The desktop is dying. Retrieved from http://www.reallylinux.com/docs/ltsp.shtml

Sadiku, M. N. O., & Obiozor, C. M. (2005). Evolution of computer systems. Retrieved from http://fie.engrng.pitt.edu/fie96/papers/434.pdf

Scheeres, J. (2001). Mexico City says hola to Linux. Retrieved from http://www.wired.com/politics/law/news/2001/03/42456

Solomon, M. G., & Chapple, M. (2005). Information security illuminated. Sudbury, MA: Jones and Bartlett.

Vaughan-Nichols, S. J. (2007). HP to buy Linux thin client desktop company. Retrieved from http://www.desktoplinux.com/news/NS7988082612.html

Webopedia. (2007). What is a network? – A word definition from the Webopedia Computer Dictionary. Retrieved from http://www.webopedia.com/TERM/N/network.html

Wolf, D. (2004). Peer to peer. Retrieved from http://searchnetworking.techtarget.com/sDefinition/0,,sid7_gci212769,00.html

Zmud, R. W., & Price, M. F. (2000). Framing the domains of IT management: Projecting the future…through the past. Ann Arbor, MI: Malloy.

APPENDIX A: STEP-BY-STEP INSTRUCTIONS ON SuSE LINUX THIN-CLIENT

Get Open SuSE 10.2.

You can either buy OpenSuSE from opensuse.org or download it. It is available as five standalone CDs, one Internet installation CD, or one DVD. The software can be downloaded at: http://en.opensuse.org/Released_Version

It is advisable to use the HTTP protocol and the i686 architecture unless you are using a 64-bit processor, in which case you can download the 64-bit version.

Either the CDs or the DVD will do. I prefer the CDs, as there are still many machines around without a DVD player. Burn the ISO images onto the CDs. Note: You must burn the ISOs as disc images; simply copying the .iso file onto a CD will not work, because the disc would then contain the image as a file rather than its contents. Nero, Roxio, or any ISO-burning software is recommended. Note: Connect your PC or Laptop to
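If you are burning from an existing Linux machine instead of Nero or Roxio, here is a hedged sketch with the common cdrecord tool (the device name, speed, and ISO filename are assumptions; check your burner with cdrecord -scanbus and use the filename you actually downloaded):

      # Verify the download against the checksum published on the openSUSE site
      md5sum openSUSE-10.2-GM-i386-CD1.iso
      # Burn it as a disc image, not as a file
      cdrecord -v speed=8 dev=/dev/cdrw openSUSE-10.2-GM-i386-CD1.iso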

Install Open SuSE 10.2.

Before you install Linux, defragment your PC several times. This moves many of the files to the beginning of the drive and makes more room for SuSE Linux. Note: For the most part, your PC should already be set to boot from the CD/DVD device; if not, set the BIOS to boot from the CD or DVD player.

Boot with CD 1. This starts the installation process, which is mostly automated; a standard installation is simply a matter of answering the prompts in the affirmative. Note: If you have ATA drives, your boot drive will be set to hda1 (hard drive a, partition 1). If you have SATA or SCSI drives, the Linux partition will most probably be set as sda1 (SCSI device a, partition 1). If there is no operating system installed on your thin-client-server-to-be, so much the better: more space for SuSE.

Most installations use CDs 1-3; other software can extend to CDs 4-5. Once done, the PC will reboot. Note: You will be asked to set a root password. Write it down. If you forget this password, you will eventually have to redo the install; Linux is serious about lost passwords, especially the root password. You will also be prompted to add a user, and the computer will normally boot into user mode. Once you've rebooted, your system should be ready for the Kiwi thin-client software.

Second Network card installation.

The server needs to act as a gateway, since the LTSP clients will have their own network, so you will need to install a second network card. One card, eth0, will connect to the Internet while eth1 connects to the intranet. The Internet is your gateway to the World Wide Web; the intranet is your internal network connecting the client laptops, PCs, or devices.

Once the second card is installed, run YaST and select Security and Users > Firewall. Select the Service Start Manually button and press the Stop Firewall Now bar. Once done, press Next and then Accept. This prevents the firewall from activating at boot time.
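The same thing can be done from a terminal. A minimal sketch, assuming the stock SuSEfirewall2 scripts shipped with this release (run as root):

      # Stop the firewall immediately
      rcSuSEfirewall2 stop
      # Keep it from starting at boot
      chkconfig SuSEfirewall2_setup off
      chkconfig SuSEfirewall2_init off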

Configure the NICs.

Once done with the firewall setup, select Network Devices > Network Card. This displays the Network Card Configuration Overview.

The first card, eth0, will connect to the Internet and is easily set up: simply select Automatic Address Setup (via DHCP).

Eth1 needs a static address. The recommendation is to use 192.168.0.254 with a netmask of 255.255.255.0. An Ethernet cable connects this NIC to a switch; the number of clients the server can take depends on the number of ports on that switch, and for more clients you can always concatenate two or three switches. It is also recommended that no more than 60 clients be connected to a low-end server.
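Behind the scenes, YaST writes this configuration to a sysconfig file. A minimal sketch of the equivalent by hand (the exact filename varies; on some releases it is keyed to the card's hardware ID rather than eth1):

      # /etc/sysconfig/network/ifcfg-eth1  (illustrative filename)
      BOOTPROTO='static'
      IPADDR='192.168.0.254'
      NETMASK='255.255.255.0'
      STARTMODE='auto'

      # Then restart networking as root:
      rcnetwork restart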

LTSP clients are best assembled on their own network. To successfully install KIWI-LTSP, you need a completely installed and updated operating system. These instructions are based on kiwi version 1.40.

The thin-client server software installation can also be found at: http://en.opensuse.org/LTSP

Adding repositories (places where the software resides on the net).

Add repositories: YaST > Installation Sources > Add > URL, then copy and paste these links one at a time: http://software.opensuse.org/download/openSUSE:/Tools/openSUSE_10.2/

Alternatively, enter software.opensuse.org as the Server Name and /download/openSUSE:/Tools/openSUSE_10.2/ as the Directory on Server.

The above site is where the KIWI engine and the different image creation files (build descriptions) are kept.

This next site contains the heart of KIWI-LTSP; it is the location of the LTSP-specific KIWI image description: http://download.opensuse.org/repositories/home:/cyberorg/openSUSE_10.2/
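The same two repositories can be added from a terminal with zypper, which ships with this release; a hedged sketch (run as root; sa is short for service-add on this zypper version, and the aliases are arbitrary):

      zypper sa http://software.opensuse.org/download/openSUSE:/Tools/openSUSE_10.2/ tools
      zypper sa http://download.opensuse.org/repositories/home:/cyberorg/openSUSE_10.2/ kiwi-ltsp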

Installing Server Components.

  1. Use YaST > Software Management to add these components. Go to Filter and select Search. In the Search dialog box, enter the package names below one at a time; each package should appear in the right-hand display box. Make sure to check the box next to each selected package.
       yast2-dhcp-server
       dhcp-server
       yast2-nfs-server
       yast2-tftp-server
       kiwi
       kiwi-desc-ltsp
       kiwi-desc-netboot
       kiwi-pxeboot   
       
       

    These packages will pull in all the dependencies needed to run kiwi-ltsp.

  2. Next, edit /usr/share/kiwi/image/kwltsp-suse-10.2/config.xml and modify
    line # 27 <source path="/mnt/iso/"/>
    and /usr/share/kiwi/image/netboot/suse-10.2/config.xml
    line # 40 <source path="opensuse://10.2"/>
    to point to your installation source.
    This can be an OpenSuSE 10.2 ISO mounted under the local file system, as in the default <source path="/mnt/iso/"/>; a complete copy of the OpenSuSE 10.2 DVD placed somewhere on the local file system/hard drive; or an NFS installation server built with YaST > Miscellaneous > Installation Server (not installed by default), with its host folder pointed at /srv/exports/instsrc/10_2, so that the lines above become <source path="/srv/exports/instsrc/10_2"/>.
  3. Open a terminal window. You need to be root (superuser): type su at the prompt and supply the root password when asked. Then run:

       sh /usr/share/kiwi/image/kwltsp-suse-10.2/setup-ltsp.sh

     Follow the on-screen instructions. This will build:

      /srv/kiwi-ltsp/* the root of the thin client chroot (725 MB)
      /tmp/kiwi-netboot/* the construction site for the initial boot files (50 MB)
      /srv/tftpboot/* the initial boot file location for the PXE boot system (20 MB)
  4. A DHCP server needs to be configured and started using YaST > Network Services > DHCP Server. Set it to “start when booting”; the card selection is eth1, i.e., the one you set to 192.168.0.254. You will have to modify the global settings or edit /etc/dhcpd.conf with the following (include the quotation marks where shown):
        option domain-name "yourdomainname";
        option domain-name-servers xxx.xxx.xxx.xxx, xxx.xxx.xxx.xxx;
        option routers xxx.xxx.xxx.xxx;
        next-server 192.168.0.254;
        option root-path "/srv/kiwi-ltsp";
        option log-servers    192.168.0.254;
        ddns-update-style  none;
        option option-128 code 128 = string;
        option option-129 code 129 = text;
        use-host-decl-names    on;
        filename "pxelinux.0";
        option host-name = concat ("ws", (binary-to-ascii (10, 8, "", substring (leased-address, 3, 6))), "yourdomainname");
        shared-network WORKSTATIONS {
         subnet 192.168.0.0 netmask 255.255.255.0 {
          range 192.168.0.50 192.168.0.150;
          default-lease-time 14400;
          max-lease-time 14400;
    
         }
        }
    	
    	
  5. A TFTP server needs to be created and started using YaST > Network Services > TFTP Server. Choose the “Enable” radio button, set the “Boot Image Directory” to /srv/tftpboot, and click Finish. One item must then be fixed by hand: make a copy of the file /srv/tftpboot/boot/initrd-netboot-suse-10.2.i686-2.1.1.kernel.2.6.18.2-34-default in the same folder and rename the copy “linux”.
  6. The NFS server exports need to be checked and the NFS server restarted using YaST > Network Services > NFS Server. The exports should be set for the subnet of the LTSP clients, i.e., 192.168.0.0/255.255.255.0.

    The /etc/exports file should include:

      /srv/kiwi-ltsp 192.168.0.0/255.255.255.0(ro,no_root_squash,async,no_subtree_check)
      /var/opt/ltsp/swapfiles 192.168.0.0/255.255.255.0(rw,no_root_squash,async,no_subtree_check)

    By default these exports allow wildcard clients using “*” or clients from the subnet 10.0.0.0; they could be automated to match the subnet chosen once the DHCP server is configured.

    It’s time to boot the first client, which should leave you at the LDM (LTSP Display Manager) screen, where you can log in with users created on the host. (A short sketch for verifying these services beforehand follows this list.) To add local device support to the clients, load FUSE on the server by typing, in a console as root:

      modprobe fuse

    and then run at the command line on the client, as the logged-in user:

      mkdir -p /home/$USER/mountpoint
      ltspfs clientIP:/media /home/$USER/mountpoint

    Note: This should be handled by LTSP5’s delayed_mounter script, which still needs to be fixed to work with OpenSuSE. If for any reason the build should fail, you will need to delete these folders:

      /srv/kiwi-ltsp/* the root of the thin client chroot (725 MB)
      /tmp/kiwi-netboot/* the construction site for the initial boot files (50 MB)
      /srv/tftpboot/* the initial boot file location for the PXE boot system (20 MB)

    Deleting them is absolutely necessary in order to start over.
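Before booting that first client, it may help to confirm that the three services are actually up. A minimal sketch under the addresses used above (the init script names are the stock SuSE ones; run as root):

      # DHCP and NFS should be running
      rcdhcpd status
      rcnfsserver status
      # TFTP usually runs through xinetd on this release
      rcxinetd status
      # The exports should be visible from the server itself
      showmount -e 192.168.0.254
      # The PXE boot loader and kernel files should be in place
      ls /srv/tftpboot/pxelinux.0 /srv/tftpboot/boot/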

Preferred thin-client server specifications:

Processor – dual-core or quad-core Intel or AMD.

Hard drive – at least 60 GB (will run with less).

RAM – 1 GB (2 to 4 GB will accelerate clients; the server will run on 512 MB; clients need 128 MB each, although they should work with 64 MB).

Devices – minimum of a CD-ROM drive (CD-RW preferred).

  • Keyboard and mouse
  • Monitor
  • USB ports (not necessary to run the server; needed for printers, flash drives, and other devices)
  • Parallel port (not necessary to run the server; needed for legacy [older] printers)

Categories: Enterprise Linux, Expert Views


2 Comments

  1. By: cyberorg

    Hi

    Great overview of the thin client technologies, however KIWI-LTSP instructions are quite outdated, please use the instructions and repositories mentioned here:

    http://en.opensuse.org/LTSP

    Cheers

    -J

  2. By: rrdonovan

    Thank you Cyberorg for the info. It has been a while. I will survey the site soon. Rodney Donovan – Be a Microsoft Slave or be Free and Opened Sourced Software.

