Linux Software Installation:

Notes, Thoughts, Comments, Etc. ...

Oct 29, 2000 -- Lloyd's observations.

Quote #1: "Your Mileage Will Vary"

Pretty commonly heard around the Open Source community.

There are so many variables, and so many tradeoffs that users want to
make, that we do our best to report on what we have found. Your results
are going to be different from ours. Our hope is that your Linux
experience is as pleasant as ours.

Quote #2: "Do I really have to re-compile the kernel?"
and: "What's a kernel?"

Quick answer: Hell NO! Quit listening to the super nerds and their
techno-babble and just install the damned thing.

This document is a continuously changing set of guidelines and
observations on the best way to install Linux on a new PC. (For how
much PC is needed, see the 'Linux-Hardware' file.) It will change
whenever a new Linux distribution comes out and as our experience grows
with both Linux and JFORCES.

First, just how much machine do I really need to run JFORCES? Knowing
that this application was originally run on a SUN minicomputer (old
UNIX iron), can I afford a PC with that much horsepower? Which Linux
distribution is up to the task? Believe it or not, you will probably
escape for less than $1500, give or take a few scraped knuckles.
Again: see Linux-Hardware.

The first BIG QUESTION is: which distribution of Linux?
The quick answer is that we have had positive experiences with:

    Red Hat 6.2
    SuSE 6.4 and 7.0

One warning: install 'almost everything' with both distributions. (You
will have to dig a little harder with Red Hat to find the 'almost
everything' option than you will with SuSE.) This gives you a fully
established developer's environment. A lesser install will all too
often leave you with a less than ideal environment for JFORCES.
Translated into user-ese: you will waste a LOT of time tracking down
little missing pieces that you really do need to run JFORCES. A larger
install also tends to set up your database ODBC access automatically.
(One less manual operation to perform.)

Just how much is 'almost everything' going to cost me? Picking on the
SuSE 7.0 distribution: about 6.5 gig of disk space. That seems like
'one-hell-of-a-lot', and 'couldn't we trim that down just a tad?' Keep
in mind that today's larger disks cost about $3.50 a gig, so 4 gig of
wasted (not really needed at the current moment) disk space will cost
$14.00. It could easily cost more than $14.00 of your time trying to
save that $14.00.

A COMMENT: We 'pick' on SuSE for most of our installation numbers
because SuSE is the 'worst case' install from a pure volume standpoint.
(Or the 'best', from the point of view that it installs so much 'stuff'
to play around with.) Red Hat is no lightweight, but you would need to
install much of the 'Red Hat Power Tools' CD(s) and lots of other
'stuff' to bring it up to where it matches SuSE in disk usage.


OK: 6.5 gig for Linux, approximately 1 gig for JFORCES source and
executables, a little here, a little more there, and ... there's got
to be a 'gotcha' in here somewhere.

There is, and it's known as the 1023-cylinder limit for booting.
Basically, the initial POST (power-on self-test) and the boot of the
operating system rely on the BIOS on the motherboard. Almost all BIOSes
won't/can't access anything beyond cylinder 1023 of your disk during
initial startup. This leads to obscure technical problems which we
would be happy to discuss at great length, at your expense, over pizza
and beer.

The best way to avoid paying for my pizza and beer is to properly
partition your hard drive and assign the right parts of the Linux
directory tree to the right partitions.
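The usual cure is a small '/boot' partition at the very front of the disk, so the kernel and the boot loader's files all sit safely below cylinder 1024. For reference, a typical /etc/lilo.conf for such a layout might look like the sketch below (device names and the kernel path are illustrative; re-run /sbin/lilo after any edit):

```
boot=/dev/hda           # put the loader in the master boot record
map=/boot/map
install=/boot/boot.b
prompt
timeout=50              # wait 5 seconds at the boot prompt
image=/boot/vmlinuz     # kernel file, safely below cylinder 1024
    label=linux
    root=/dev/hda2      # the partition holding '/'
    read-only
```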

The first major hurdle in most Linux distributions is:

>>> how the #$%@!*' do I partition the disk? <<<
>>> ahhh, what's a directory tree, anyway ?? <<<

Most distributions will put all of your 'stuff' into one (really) big
happy partition, and add a swap partition. If the installer detects
that you want a lot of 'stuff', and that one great big happy partition
just might exceed 8 gig (or 1000+ cylinders), it will create a small
partition for '/boot', leaving you with a partition scheme that looks
something like:

/dev/hda1   /boot    8 meg    >> from cylinder one to cylinder one;
                              >> this takes care of the '1023'
                              >> cylinder limit for booting Linux
/dev/hda2   /        8 gig    >> root, and everything else (or more);
                              >> basically the whole hard drive
                              >> except for /boot and (swap)
/dev/hda3   (swap)   100 meg  >> appropriate for most systems
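Whichever scheme the installer picks, you can see exactly how the disk was carved up afterwards; df lists each mounted partition with its size and mount point (the output naturally differs from machine to machine):

```shell
# List every mounted partition, its size, and where it is mounted.
df -h
```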

A little bit of poking and prodding on various Linux distributions
revealed that separating out the '/usr' part of the directory tree had
the greatest beneficial impact and kept things pretty simple. The disk
partitioning thus becomes:

/dev/hda1   /        3.0 gig  >> this is root (system stuff)
/dev/hda2   (swap)   100 meg  >> good working default size
/dev/hda3   /usr     5.0 gig  >> lots of documentation and
                              >> common library files here
/dev/hda4   ?????    ??? gig  >> open for whatever you need:
            /home             >> one possibility
            /data             >> another good possibility
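For the record, this scheme can be written down as an sfdisk input file -- a sketch only, and one that DESTROYS the existing partition table if you actually feed it to sfdisk. Sizes are in megabytes (sfdisk's -uM option); type 83 is a Linux partition, 82 is swap:

```
# Usage (dangerous!):  sfdisk -uM /dev/hda < partitions.txt
,3072,83       # /dev/hda1  /       3.0 gig
,100,82        # /dev/hda2  (swap)  100 meg
,5120,83       # /dev/hda3  /usr    5.0 gig
,,83           # /dev/hda4  the rest of the disk
```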


Lightweight 'tweaks' can be fast, simple, and beneficial. BUT! Our
JFORCES users and developers can do better than accept a blindly
generic default partitioning scheme that is merely 'ok' for most
systems. Our goals are to:

(1) Spread things out, making critical areas smaller and easier to
    back up (and restore) because they logically stand alone.
(2) Isolate major areas from each other. If one fails, it should NOT
    be easy for the failing area to impact another.
(3) Ensure system upgrades have NO impact on user areas.
(4) Ensure user upgrades (or new applications) have NO impact on
    system areas.
(5) Keep the database data itself in its own area.


The quick partitioning answer, for one hard drive, doing a SuSE 6.4 or
7.0 full install, assuming an approximately 20 gig drive -- a good
user's profile:

/dev/hda1   /        3.0 gig   >> this is root (system stuff)
/dev/hda2   (swap)   2000 meg  >> may be less in the future
/dev/hda3   /usr     5.0 gig   >> lots of documentation and
                               >> common library files here
/dev/hda4   [extended partition] >> holds all the following
                               >> 'logical partitions'
/dev/hda5   /opt     2.0 gig   >> a good place for StarOffice,
                               >> WordPerfect, other large applications
/dev/hda6   /home    3.5 gig   >> all your users' personal accounts,
                               >> and where JFORCES programs end up
/dev/hda7   /data    6.0 gig   >> all your major data/database files;
                               >> where the JFORCES database should end up
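Once the partitions exist and carry file systems, /etc/fstab ties each one to its mount point. A sketch matching the single-drive scheme above, assuming ext2 (the common default for these distributions):

```
# device      mount point  type  options   dump  fsck-order
/dev/hda1     /            ext2  defaults  1     1
/dev/hda2     swap         swap  defaults  0     0
/dev/hda3     /usr         ext2  defaults  1     2
/dev/hda5     /opt         ext2  defaults  1     2
/dev/hda6     /home        ext2  defaults  1     2
/dev/hda7     /data        ext2  defaults  1     2
proc          /proc        proc  defaults  0     0
```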


Using dual drives, 15 gig each -- more of a developer's machine.
Here, all of the 'system' stuff is on the first drive, while JFORCES
and all its data are on the second drive.

/dev/hda1   /        2.5 gig   >> this is root (system stuff)
/dev/hda2   /usr     5.0 gig   >> lots of documentation and
                               >> common library files here
/dev/hda3   /opt     3.5 gig   >> good place for StarOffice,
                               >> WordPerfect, other large applications
/dev/hda4   /tmp     4.0 gig   >> lots of log and spool file space

/dev/hdb1   (swap)   2000 meg  >> may be less in the future
/dev/hdb2   /home    6.0 gig   >> all your users' personal accounts,
                               >> where JFORCES programs end up
/dev/hdb3   /data    8.0 gig   >> all your major data/database files;
                               >> where the JFORCES database should end up


But I hit the database REALLY hard, and I can salvage 10 gig drives
out of systems that are going to the dump. Can I put them to use?
Are there any 'gotchas'?

Using three drives, 10, 15, and 10 gig: here, all of the 'system'
stuff is on the first drive, JFORCES data is on the second drive, and
JFORCES programs (under /home) are on the third drive.


/dev/hda1   /        2.5 gig   >> this is root (system stuff)
/dev/hda2   (swap)   2000 meg  >> may be less in the future
/dev/hda3   /usr     5.0 gig   >> lots of documentation and
                               >> common library files here
/dev/hda4   /shared  2.0 gig   >> for whatever you want

/dev/hdb1   /data    8.0 gig   >> all your major data/database files;
                               >> where the JFORCES database should end up
/dev/hdb2   /opt     3.5 gig   >> good place for StarOffice,
                               >> WordPerfect, other large applications
/dev/hdb3   /tmp     3.5 gig   >> lots of log and spool file space

/dev/hdc1   /home    6.0 gig   >> all your users' personal accounts;
                               >> where JFORCES programs end up

Here we have split the program access versus the database access across
a pair of drives, which is good. Notice that the pair is also split
across different IDE channels, which is even better. But there is a
'gotcha' on most systems: if you have a relatively slow CD-ROM, Zip
drive, or tape backup system attached to a channel, the channel will
probably slow down to the native transfer rate of the SLOWEST device on
that channel. So, after installing your system, disconnect the power
from any slow drives so the hardware doesn't see them. (This can get
to be a real hassle at upgrade/backup time.)
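One way to see whether a slow device is dragging a channel down is to time sustained reads on each disk with hdparm (run as root; the device names here are illustrative, and your numbers will vary -- there's that mileage again):

```shell
# Time buffered reads on each IDE disk that actually exists.
# A channel throttled by a slow CD-ROM or Zip drive will show a
# noticeably lower transfer rate than its unburdened twin.
for disk in /dev/hda /dev/hdb /dev/hdc; do
    if [ -b "$disk" ]; then
        hdparm -t "$disk"
    fi
done
```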

System tuning -- where to put your data, how many drives, how many
controllers, etc. -- is an area of constant research. The above
data/partition layouts may not be the best. Remember above when we
said 'Your Mileage Will Vary'?

A brief description of 'where is it?' and 'what does it contain?':

'/' (root)
    Has most of the operating-system-dependent files. By default, any
    parts of the directory tree (file system) that aren't assigned to
    their own partition end up here.

'/usr' (user files)
    A somewhat misleading directory name. This is where most of the
    files reside that the user thinks of as system-level resources:
    compilers, interpreters, run-time libraries, and TONS of
    documentation. The users use these files a lot, but they don't
    belong to any one user.

    A minimal install's '/usr' won't be very large -- much smaller
    than '/' (root). But once you start to install a lot of software
    packages, '/usr' explodes in size. Much of the growth is due to
    additional documentation files found under '/usr/doc/...' and the
    associated resources commonly found under '/usr/lib/...'.
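You can watch that growth yourself; du totals the two big consumers (these paths follow the layout of 2000-era distributions -- later systems moved documentation to /usr/share/doc):

```shell
# Summarize the two biggest consumers of '/usr' disk space.
# '|| true' keeps the command quiet about any path that is absent.
du -sh /usr/doc /usr/lib 2>/dev/null || true
```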

'/opt' (optionally installed packages)
    This is where many sysadmins like to install large optional
    packages that are shared among many users. StarOffice, WordPerfect,
    and the GIMP are good examples. It is sometimes thought of as the
    place where 'office productivity tools' should go.

'/tmp' (temporary (data) storage area)
    The system likes to use this area for temporary storage, and most
    programmers (and programs) know it's there for scratch files.
    Individual users, with their own workstations, rarely need to set
    this up in its own partition. Servers and heavy-duty developers do
    need it: all of the system-level log files will end up here, as
    well as the print spooler files. The more users a system serves,
    the larger this area's disk space (partition) needs to be.

'/home' (all user accounts)
    Each user's individual files go here. JFORCES, as a user account,
    should go here (/home/forces); it will consume about one gig of
    disk space. postgres, as a user account, should go here too.
    The same goes for MySQL, if you need it.

'/data' (large data files and databases)
    If it's big, if it's ugly, if it's a lot of data that many users
    need simultaneous access to, put it here.

The overall goal is to reduce the amount of damage should something go
wrong. By spreading things out, if one disk partition is lost, only a
portion of your critical resources is lost. The portions that are the
hardest to replace (/home, /data) are isolated from the rest of the
system for easier backups and restores. The most likely area to
overflow without warning (/tmp log files) is isolated from everyone's
commonly shared resources.
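One payoff of that isolation: each hard-to-replace area can be archived on its own. A minimal sketch (the backup location is illustrative -- real backups belong on another disk or on tape, not in /tmp):

```shell
# Archive each stand-alone area into its own compressed tarball.
BACKUP_DIR=/tmp/backups        # illustrative; use another disk or tape
mkdir -p "$BACKUP_DIR"
for area in home data; do
    if [ -d "/$area" ]; then
        tar czf "$BACKUP_DIR/$area.tar.gz" -C / "$area"
    fi
done
ls -l "$BACKUP_DIR"
```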

It becomes a tradeoff. Do you pool everything together in one big
happy partition, which gathers ALL of the free disk space together?
Or do you waste free disk space (spreading it across multiple
partitions), knowing that if one area fills up it won't affect the
others? Do you watch your system resources like a hawk, or are you
more cavalier? Do you have average users, who are most likely to want
heavy-duty access to your databases, or developers who are likely to
be all over the system, pushing the limits of anything that can be
pushed? Do you have lots of 'pizza and beer' money for late-afternoon
consulting fees? Our answers won't be any more lucid, but we'll all
feel good about them.

It is now early November of 2000. Intel is scheduled to release new
processors by the end of the month (P-IVs?), memory prices are falling
(dramatically in some cases), and our answers to your Linux hardware
and software configuration questions will vary as much over time as
your mileage -- in this still rapidly evolving world of computers.