A Crash Course in Tcl

1. Introduction

Tcl (normally pronounced tickle) is a simple, fast, and extensible scripting language. It is well suited to string processing tasks, and has a sufficiently large standard library of commands to be useful for everyday tasks. Tcl is extremely easy to add to existing programs as a ready-made scripting solution, and its distribution terms are rather loose. Tcl is a worthwhile language to learn, and this short tutorial is meant to get you started.

The Tcl scripting language originated at Berkeley, where Dr. John Ousterhout and his students found themselves implementing ad hoc scripting languages to control various software tools. Realizing the wastefulness of writing a new script interpreter for each software project, Dr. Ousterhout set out to build a single reusable, embeddable scripting language. Tcl has been through many changes since its inception, and it has outgrown its original purpose. Tcl is now widely used for scripting on many platforms.

This tutorial assumes at least a basic familiarity with the fundamental concepts of computer programming. I have aimed for clarity rather than conciseness; I realize that this introduction may be a bit slow for experienced programmers.

2. Tcl Statements

Tcl programs consist of statements (individual pieces of a program that do something), and each Tcl statement generally occupies one line (though statements may also be separated with semicolons, and they may span multiple lines under certain conditions). Each statement in turn consists of a command (a word that makes Tcl do something) and any number of arguments (pieces of information that control how the command operates). These arguments are separated by whitespace (spaces or tabs) and are affected by certain grouping symbols. A typical line of Tcl code has the following structure:

something arg1 arg2 {arg three} "arg four"

This line of code begins with the command "something" and contains the four arguments "arg1", "arg2", "arg three", and "arg four". Note that the curly braces and quotation marks group text into arguments; in this example, curly braces prevent "arg" and "three" from being treated as separate arguments, but rather as "arg three". Curly braces and quotation marks serve slightly different purposes, which we will clarify shortly. Let’s now take a look at a simple but complete Tcl program.

puts {Hello, world!}
set zealotry {Tcl rules!}
puts "w00t! $zealotry"

The first line consists of the command puts, and one argument. The text Hello, world! is enclosed in curly braces ({ }), so it comprises a single argument. puts prints its argument to a file (or the terminal if no file is specified), so this line of code causes "Hello, world!" to appear on the terminal.

The second line of code sets a variable. Tcl variables associate names with pieces of data. This line of code associates the text "Tcl rules!" with the name "zealotry", so that the program can retrieve it later. Again, curly braces group the two words "Tcl rules!" together into a single argument. Curly braces prevent variable substitution (which we will discuss shortly); dollar signs and other special characters act like ordinary characters inside curly braces.

The third line of code is similar to the first, except that the argument to puts uses quotes instead of curly braces. The only difference is that quotes allow Tcl to perform variable substitution, whereas curly braces do not. Any dollar sign ($) followed by a variable name is considered a request for substitution, and Tcl will replace this text with the value of the given variable. In this case, Tcl will replace $zealotry with the value we gave that variable earlier, "Tcl rules!". The finished piece of text is given to the puts command, and so Tcl prints out "w00t! Tcl rules!". If a variable name contains special characters or spaces, you can protect it with curly braces. For instance, if our variable were called scripting envy instead of zealotry we would have to write it as ${scripting envy}.

Variable substitution can actually take place anywhere, not just inside quotes. For instance, puts $zealotry would simply print the value of the variable zealotry, and puts $zealotry$zealotry would print the variable twice. However, puts $zealotry $zealotry would not work, since this would parse as a command with two separate arguments.

3. Commands and Substitution

Every Tcl command has a return value (a piece of data produced by the command). Many commands have an empty return value (that is, they don't return anything useful); this is represented by an empty string. However, most commands do return useful information. For instance, the expr command evaluates mathematical expressions (such as (4 * 5 + 3) / 10) and returns the resulting number. Scripts use command substitution to access return values.

Command substitution is similar to simple variable substitution, except that it substitutes the return value of a Tcl command instead of the value of a variable. Square brackets ([ ]) trigger command substitution.
Let’s examine a simple example of command substitution. In this example we will use the expr command to calculate a number, and then print it out with puts.

set num [expr 1+2+3]
puts "The result is $num"

The first line of code sets the variable "num" to the return value of the expr command. This return value will be the number 1+2+3, which is 6. Therefore, this line sets "num" to the number 6. The second line uses simple variable substitution to print out the value of the variable "num".

Command substitution works inside quotes, but not inside curly braces. For instance, we could have written puts "The result is [expr 1+2+3]" instead of breaking the script into two statements. It is also entirely valid to mix multiple substitutions of different types in the same statement. Scripts commonly build large strings by combining multiple variables, commands, and special characters.
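
Here is a small sketch of that (the variable name is just an example), nesting a command substitution and a variable substitution inside one quoted string:

set name {Tcl}
puts "'$name' is [string length $name] characters long"

This prints 'Tcl' is 3 characters long: the string length command runs first, and its result is spliced into the argument before puts ever sees it.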

4. Tcl Evaluation

As you begin to construct increasingly complex Tcl statements, it is important to understand the exact order in which substitutions are performed. We will provide a sufficient summary here and leave the exact details to a reference manual. Please understand that this description is just a model, and perhaps a slightly oversimplified one at that. The Tcl interpreter performs various optimizations internally, but those are irrelevant to our discussion.

5. The Tcl Order of Evaluation

  1. The Tcl file is broken into statements, separated by newlines or by semicolons. However, newlines and semicolons enclosed within quotes or curly braces are ignored. In addition, statements beginning with a hash (#) character are ignored to allow comments to be added to the program. Comments extend from the opening hash mark to the end of the line.
  2. Backslash substitution is performed. This replaces certain combinations such as \n and \t with the corresponding control characters (in this case, newline and tab). This is an important feature, since some special characters are difficult to insert into a source file.
  3. The statement is broken into words. Words are groups of characters, possibly contained in curly braces or quotes, separated by spaces.
  4. Variable substitution is performed once on the entire statement.
  5. Command substitution is performed once on the entire statement.
  6. The first word is treated as the name of a command, and Tcl attempts to execute this command with the rest of the words as arguments.
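
The following short sketch exercises several of these rules at once (comments, semicolons, and backslash substitution):

# this entire line is a comment and is ignored
set x hello; puts $x    ;# two statements separated by a semicolon
puts "one\ttwo"         ;# \t becomes a tab before puts runs

The first puts prints hello; the second prints one and two separated by a tab character.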

6. Recursive Evaluation

Tcl would be a weak language if not for recursive evaluation. This feature allows scripts to evaluate parts of themselves as separate scripts, and substitute the results into other data. For instance, a Tcl script can invoke the Tcl interpreter on a piece of user-specified text, or on a script prepared by the program itself. While this may sound silly and useless at first, it is actually a very convenient and necessary feature.

The eval command combines all of its arguments into a string (piece of text), and then runs the Tcl interpreter on it. eval returns the result of the code it evaluates (as a string).

Let’s look at an example of recursive evaluation with the eval command. In this example, we will use recursive evaluation to perform double substitution.

set varname {$message}
set message {Hello, world!}
eval "puts $varname"

This example begins by setting the variable "varname" to "$message". Note that the $ does not cause variable substitution in this case, since it is enclosed with curly braces. The script then sets "message" to "Hello, world!".
Now for the tricky part. The third line begins with the eval command. However, eval is a normal Tcl command, and Tcl performs its normal substitution passes before executing the command. After substitution, this line becomes equivalent to eval {puts $message}. The eval command evaluates this argument like any other piece of Tcl code, and puts prints "Hello, world!" to the terminal.
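
As another sketch (all names here are illustrative), a script can assemble a command as a string at run time and hand it to eval:

set op puts
set msg {built at run time}
eval "$op {$msg}"

After the normal substitution pass, the last line is equivalent to eval {puts {built at run time}}, so puts prints the message.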

7. Control Structures

A procedural language such as Tcl would be fairly useless without a mechanism to control the flow of its execution. Tcl provides several commands for conditionally or repeatedly executing sections of code. This allows programs to make decisions based on their input.

The most important Tcl control command is if. This command uses expr to evaluate a logical expression (such as $x == 1 or 2 < 5), and invokes eval on a given piece of code if the expression's value is not zero. (Note: the expr command returns 1 if a logical expression is true, or 0 if it is false. You can also use normal mathematical expressions with the if command, but this is less useful.) Here is a simple example of Tcl's if command:

set number 1
if {$number == 1} {
    puts "The number is equal to one."
} else {
    puts "The number is not equal to one."
}

The if statement checks to see if the variable "number" is equal to one. If it is, if calls eval on the next argument, which is a puts command enclosed in curly braces. If the value is not equal to one, if invokes the block of code after the (optional) else argument. This allows the program to choose between two alternatives based on the outcome of a simple test.
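
The if command is only one of Tcl's control commands; the language also provides looping commands such as while and for, which work the same way (the condition is an expr expression, and the body is a braced script). A brief sketch:

set i 0
while {$i < 3} {
    puts "i is $i"
    incr i
}

This prints i is 0, i is 1, and i is 2, then stops once the condition becomes false.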

Note that if is a command, just like any other. It is not a reserved word, and a script could even redefine the meaning of if (though this would probably be a bad idea). Tcl is unusual among languages in that it has no reserved words.

Keeping FreeBSD Up-to-date

1. Introduction

When I started using FreeBSD back in the 2.2.2-RELEASE days, I had little knowledge of the tools available for keeping the system updated with the latest versions of programs and security fixes. Most of the tools which existed for updating FreeBSD are still in use today, along with a number of extra applications for making them easier to use and more comprehensive.

2. Staying Current to the Minute with CVSup

The CVSup utility, by John Polstra, is recognized as the standard for keeping system files on a FreeBSD system up-to-date. CVSup automates the process of connecting to a server and mirroring the files in a specific collection, downloading only as much data as is necessary to update your local files to the versions available on the FreeBSD master CVS repository.

There are alternatives to CVSup for staying current, including anonymous
CVS and CTM, as well as ordering CD and DVD subscriptions. More information
can be found in the FreeBSD Handbook:

http://www.freebsd.org/handbook/mirrors.html

2.1. Installing CVSup

You may have already installed CVSup during the initial system installation. To find out, try typing cvsup -v at a shell prompt. You should get output similar to the following:

CVSup client, GUI version
Copyright 1996-2001 John D. Polstra
Software version: SNAP_16_1e
Protocol version: 17.0
Operating system: FreeBSD4
http://www.polstra.com/projects/freeware/CVSup/
Report problems to cvsup-bugs@polstra.com

If the software version is older than SNAP_16_1e, or if you don't have CVSup installed, then you will need to install the latest version (older versions of CVSup have a timestamp bug caused by the S1G rollover).

Although you can install CVSup from the Ports Collection (covered later),
I recommend you install the statically-linked package. The non-static
version of CVSup depends on the Modula-3 libraries, which may take a long
time to build and install, and unless you’re a Modula-3 developer, you
probably won’t need them for anything else. To install the package, run
the following command (as root):

pkg_add http://people.freebsd.org/~jdp/s1g/i386-gui/cvsup-16.1e.tgz

This will fetch and install the CVSup package.

2.2. Creating cvsup files

To run CVSup, you need a "supfile", which instructs CVSup what server, collections, and update options to use. I use a number of pre-built CVSup files, found below (you may want to change the "host" line to a closer mirror). You can download these files to any directory you want; common locations are /usr/local/etc, /usr/cvsup, and /usr/local/share/cvsup.

2.2.1. Update the Ports Collection

Download ports.cvsup.

*default host=cvsup3.freebsd.org
*default base=/usr
*default prefix=/usr
*default release=cvs
*default tag=.
*default delete use-rel-suffix

ports-all

2.2.2. Update the sources to the latest 4-STABLE

Download stable.cvsup.

*default host=cvsup3.freebsd.org
*default base=/usr
*default prefix=/usr
*default release=cvs
*default tag=RELENG_4
*default delete use-rel-suffix

src-all

2.2.3. Update the source to the latest 5-CURRENT

Download current.cvsup.

*default host=cvsup3.freebsd.org
*default base=/usr
*default prefix=/usr
*default release=cvs
*default tag=.
*default delete use-rel-suffix

src-all
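
Once you have a supfile, running CVSup is a single command. A typical invocation, assuming you saved the file as /usr/local/etc/stable.cvsup (the -g flag disables the GUI and -L 2 prints detailed progress messages):

# cvsup -g -L 2 /usr/local/etc/stable.cvsup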

3. About FreeBSD Ports and Packages

The FreeBSD packaging system is quite a bit different from the packaging
systems for Linux distributions. "Packages" in FreeBSD are always binary,
and will usually install all development headers, libraries, and other
resources instead of using a separate "-devel" package (to confuse things
further, "-devel" packages in FreeBSD usually refer to snapshots of
packages which are under development).

The Ports Collection usually builds from source code, and allows much
greater flexibility in compile-time options (such as enabling or disabling
different features). All packages actually originate as ports.

Everything in the package system (including ports) revolves around
the package database, which is found in the /var/db/pkg directory. Each
installed package is represented by a subdirectory, and inside that directory
are a number of text files which store data about dependencies and what
files belong to the package. This means that editing the package database
is simply a matter of adding and removing directories, and editing the
text files they contain.
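
As an illustrative sketch (the package name is just an example), listing one of those subdirectories shows the metadata files:

# ls /var/db/pkg/mozilla-0.9.9_1,1
+COMMENT     +CONTENTS    +DESC        +REQUIRED_BY

+CONTENTS holds the packing list (every file the package installed), +COMMENT and +DESC hold the short and long descriptions, and +REQUIRED_BY (present only when something depends on the package) lists dependent packages.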

Tip:
After updating your Ports Collection, you can get a list of ports which
have newer versions available with the following command:
pkg_version -l '<' -v

3.1. Packages: Installing Apps

The FreeBSD package system allows you to install pre-compiled applications
via package files. FreeBSD package files end with the extension ".tgz",
which may be confusing, but accurately describes the fact that they are simply
gzipped tarballs with a few extra files for install scripts and packing
lists (among other things).

Packages can be found on the FreeBSD CDs or off the net, usually from
ftp://ftp.freebsd.org/pub/FreeBSD/packages/ (or a closer mirror).

There are two ways of installing packages: The sysinstall(1) utility, which
provides an ncurses interface to managing packages, and the command-line
utility pkg_add(1).

3.1.1. Installing via sysinstall

sysinstall(1) is the same tool which is used for installing FreeBSD,
hence the name. sysinstall can fetch package lists from many different
media sources (including CD, FTP, and NFS) and build a menu of installed
and available packages from that source.

To install packages via sysinstall, run sysinstall as root:

# /stand/sysinstall

From the menu, select "Configure", then "Packages". You should be
prompted to select a source. Once you have selected a source, sysinstall
will download the package list from that source and display a menu of
package categories. When selecting packages, if you select one package,
then all of its dependencies will automatically be selected (they will be
marked with a "D"). Packages to add will be queued until you select
"Install Packages" from the category menu.

To uninstall packages, deselect the item. The package will be uninstalled
immediately.

Note: I really only recommend that you use sysinstall when
installing packages from a CD, since it doesn’t understand changes to
package categories very well.

3.1.2. Installing via pkg_add

pkg_add(1) is actually the utility which gets called by sysinstall
when installing packages. To use it, specify a package file to install
(pkg_add understands URLs, so you can point it at a package from the
web or FTP). Example:

pkg_add mozilla-0.9.9_1,1.tgz

To uninstall a package, use pkg_delete(1). Note that you will have
to specify the version number of the package:

pkg_delete mozilla-0.9.9_1,1

To find information about a package, use pkg_info(1). Just as with
pkg_delete, you must specify the version of the package.
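
For example, using the version number shown elsewhere in this article:

pkg_info mozilla-0.9.9_1,1

Running pkg_info with no arguments lists every installed package, which is handy when you don't remember the exact name.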

Tip:
To use auto-completion of package names, specify the full path to the
package, e.g., "pkg_delete /var/db/pkg/mozilla-0.9.9_1,1".

Tip:
To find out what package a file belongs to, use
"pkg_info -W filename".

3.2. Ports: Installing Apps the Easy Way

The FreeBSD Ports Collection allows greater control over the install
process, and is always more up-to-date than the packages since the official
packages all originate as ports.

The Ports Collection is usually installed in /usr/ports, and is
arranged by category. To install an application from the Ports Collection,
simply cd into the directory for that application (e.g.,
"/usr/ports/www/mozilla") and type (as root) make install.
The source code for the application will be downloaded (consulting mirrors
if necessary), checked, configured, built, and installed as if it were a
package. Any dependencies for the app will also be installed automatically,
using the Ports Collection.

Also, some ports may have special warnings, messages, or options for
building and installing (for example, the www/mod_php4 port displays a
menu allowing you to select what compile-time features of PHP to
enable).

Once a port has finished installing, typing make clean will
clean up the work files.

Since installing a port adds an entry to the package database, you can
uninstall it by using the pkg_delete(1) command.

To build your own package from a port, use make package.
The resulting package can be found in /usr/ports/packages/All, with
symlinks to it in /usr/ports/packages/Latest and in
/usr/ports/packages/(category). In fact, this is exactly the system
used by the FreeBSD master package builder to periodically generate
new packages.

3.2.1. Finding Ports

Finding what directory a port lives in can be daunting at times, due to
the large size of the Ports Collection. There are three ways to find
where a particular port lives:

  1. Use FreshPorts. FreshPorts, at http://www.freshports.org/
    is essentially the FreeBSD equivalent of
    Freshmeat.net. You can search
    for ports, view descriptions, and view changes made to the port.
  2. Use ‘whereis’. The whereis(1) command includes the ports
    directories in its search, so you can try to guess the name of the port:

    # whereis mozilla
    mozilla: /usr/ports/www/mozilla

  3. Use ‘make search’. The Ports Collection has a search feature
    built-in. Example:

    # cd /usr/ports

    # make search key='mozilla'

When installing modules for a particular language, ports entries have
a specific naming convention to make them easier to find:

  • Perl: Perl ports are prefixed with "p5-". The "::" in module
    names is converted to a "-". For example, the popular "Date::Manip"
    module is found in /usr/ports/devel/p5-Date-Manip.

  • Python: Python ports are prefixed with "py-" or "py22-" (if
    the module is only usable with Python 2.2).
  • Ruby: Ruby ports are prefixed with "ruby-".

3.2.2. Setting Port Options

The file /etc/make.conf sets all global options for building ports
(as well as options for building the kernel and the world). The defaults
can be found in /etc/defaults/make.conf. Any entries in /etc/make.conf
will override the defaults. There are options for specifying options to
gcc (CPUTYPE, CFLAGS), whether to install documentation with ports
(NOPORTDOCS), what version of XFree86 to use (XFREE86_VERSION),
and export control settings (USA_RESIDENT).

Tip:
Many ports have the option of building Gnome-specific versions. By default,
if Gnome is installed, then any ports installed afterwards will be
built with Gnome options. To disable this behavior, either add
WITHOUT_GNOME=1 in /etc/make.conf or type it at the
command line before installing the port.

Tip:
If you are running XFree86 4.x (instead of the default 3.x), add
XFREE86_VERSION=4 in make.conf. Otherwise, ports which
depend on X will attempt to compile XFree86 3.x as a dependency!

3.3. Portupgrade: Installing Apps the Even Easier Way

To address the problem of how to cleanly upgrade a package or port to
the latest version, Akinori MUSHA
has created the Portupgrade suite of tools, which automate the process
of using the Ports Collection.

Installing Portupgrade is simple from the Ports Collection:

# cd /usr/ports/sysutils/portupgrade
# make install

Note that Portupgrade is written in Ruby, so the Ruby interpreter will
need to be installed (this should be done automatically when installing
from the Ports Collection).

Once installed, Portupgrade maintains its own package database which
mirrors the real database. Whenever you make changes to files within
/var/db/pkg, it automatically detects the changes and updates its internal
database.

An example of using Portupgrade to upgrade a package using the Ports
Collection:

# portupgrade mozilla

That’s it, no version number needed (it automatically figures out which
version of an installed package needs updating).

If you want to upgrade a package via a package (and not the Ports
Collection), use the "-p" option with Portupgrade. It will attempt to
fetch the package from the FreeBSD master FTP site, and failing that, will
build from the Ports Collection. (For more options about this, see the
portupgrade(1) manpage).

While you can still use pkg_add and make install
to install packages and ports (respectively), Portupgrade provides the
"portinstall" utility which has many of the same options as Portupgrade.

Portupgrade also contains a utility to check and fix the package database
if something bad happens (such as circular dependencies, multiply-installed
packages, etc.). The command "pkgdb -F" will scan the package
database, verifying that all entries make sense. It will ask you how you
want to fix any problems encountered (with a bit of logic to suggest an
appropriate solution).

3.4. A Note About Version Numbers

The version numbers used in FreeBSD packages may seem a bit odd. This
is due to the way ports are marked. For example:

mozilla-0.9.9_1,1

In this example, "0.9.9" is the version of the application itself.
The "_1" is the "portrevision". Every time a change is made to a port
which can affect the installation (such as adding a patch to fix a security
hole or fixing the list of files installed), the portrevision is bumped up
by one. When a new version of the application is released, the
portrevision is reset.

The ",1" in the version number is called the "portepoch". This value
is incremented by one every time a port is reverted to an older version due
to a major issue (such as security problems, licensing or distribution
changes). The portepoch may never decrease.

4. Upgrading to the Latest Version

4.1. Branches

The FreeBSD project develops two different major branches concurrently,
as well as many smaller branches. The two main branches are "-STABLE"
and "-CURRENT". The way releases come about is through the following
process:

  • Patches are first applied to -CURRENT, usually with an
    "MFC-after" note, which specifies the length of time for testing.
    (MFC stands for "Merge From Current").
  • Eventually, most patches are merged into the -STABLE branch.
  • Periodically, on a release schedule which takes into account major
    changes in features and bundled applications, the FreeBSD core team will
    mark a particular snapshot of -STABLE as "-RELEASE".

As of this article, the current stable version of FreeBSD is
"4-STABLE" and the current development version is "5-CURRENT".

4.2. Building the New Version

First, use CVSup to update your sources to the latest version (see
section 2.2). Next, do the following steps:

# cd /usr/src
# make world
# make buildkernel KERNCONF=GENERIC
# make installkernel KERNCONF=GENERIC
# mergemaster

If you are using a custom kernel configuration, replace
"GENERIC" with the name of your kernel configuration file.

Configuring Netscape on UNIX

1. Introduction

Everyone has used Netscape and everyone seems to share the same waffling opinion.
I hear, "I hate it!" and "It's great!" from the same people…sometimes even in the same
day. While Netscape on UNIX can leave a bit to be desired in terms of the quick
end-user experience, it is a reasonable browser and is oftentimes needed for some
applications.

I use Netscape on UNIX more than any other browser, but I also switch between
Netscape, Internet Explorer, Mozilla, and OmniWeb. I have not found a perfect browser
and I don’t think one will ever exist. Netscape irritates me the least, probably because
I’ve been using it the longest. If you’re cringing right now, then what I’m about to
explain may be what you’re looking for. If you’re ever forced into using a system with
only one browser and that browser is Netscape, then the information provided in this
document could prove to be useful at some point in the future.

2. Getting Netscape

If you lack Netscape, you can get the latest release from ftp.netscape.com in the
/pub/communicator directory.

3. Understanding Netscape on UNIX

The UNIX version of Netscape was originally written for IRIX. After that initial release, it was eventually ported to other UNIX
platforms as the need arose. Unfortunately, each UNIX system is slightly different from
the others, which made a common code base between them all close to impossible.
Bear in mind the time period when this was all happening: we had no GTK+, we
didn't have Qt, and Linux wasn't even really usable yet. We have to
make mistakes to move forward.

With that in mind, Netscape settled on the 1.2 version of Motif. It worked across all the
UNIX platforms they wanted to support and it cut down on development time because
they did not have to create a new toolkit. In addition, many vendors were already
shipping Motif 1.2 or were on the verge of doing so, with the advancement of CDE
across most major UNIX operating systems. By choosing Motif, Netscape would (in
theory) be welcomed into the CDE world if it looked and acted like other CDE programs.

The choice of Motif for the Netscape on UNIX toolkit made supporting Linux rather
difficult. Since Motif costs money, they could not assume people would have it. So the
only real choice was to statically link Netscape on Linux with Motif. This is one of the
big reasons why it is so slow on Linux. If you use Netscape on IRIX or Solaris, where a
dynamically linked version is available, you will notice a slightly more responsive UI.

So, it is a Motif program and works like most other Motif 1.2 applications. Most people
on Linux first experience Motif through Netscape, which is probably bad for Motif's
reputation, because it is actually a nice toolkit to work with. Lastly, Motif suffers
from something that cannot be corrected: it's ugly. So, we just create new less-ugly
toolkits and move on to using those. Even still, Motif will have to remain a staple of most commercial UNIX operating
systems because it is so well established in that user base. Netscape Communicator
will continue to use Motif, but the newer Netscape releases based on Mozilla have
moved to using GTK+ on the UNIX platform. A bit slower, but a little nicer and more
"current".

Lastly, it's worth noting that Netscape does not behave like a normal X application. This
is good and bad, so I won't bother arguing either side here. Just understand that it
doesn't respond to normal X command line options, but it does support useful options
that give you essentially the same type of functionality (some are even unique to Netscape
and browsing in general). Remember that Netscape on UNIX works across many UNIX
platforms, so trying to conform to each of those standards would have been more time
consuming than settling on a standard across all the Netscape versions.

4. Configuring Netscape

Netscape makes use of X resources as well as its own custom configuration file. More
things can be configured via the custom file, but X resources usually provide a quick
and easy way to change a setting. First, I’ll explain X resources:

4.1. X resources

The .Xdefaults file in your home directory holds X resources for any applications
that support them. You may already have X resources for XTerm or a similar
program. The format of an X resource is:

Program*resource: value

For a listing of pretty much all possible X resources for Netscape, have a look at
the Netscape.ad file in your Netscape program directory (probably
/usr/lib/netscape, /usr/local/netscape, or /opt/netscape). This file has all the
Netscape X resources complete with comments and possible values.

With that in mind, here are some handy X resource settings for customizing
Netscape. Put these in your ~/.Xdefaults file:

  1. Disable the useless buttons, like Shop, on the toolbar:

    Netscape*toolBar.destinations.isEnabled: false
    Netscape*toolBar.myshopping.isEnabled: false
    Netscape*toolBar.search.isEnabled: false
    Netscape*toolBar.viewSecurity.isEnabled: false
    Netscape*toolBar.home.isEnabled: false
    Netscape*toolBar.print.isEnabled: false

  2. Change the default text selection color:

    Netscape*selectForeground: White
    Netscape*selectBackground: Blue

  3. Disable the splash screen on startup:

    Netscape*noAboutSplash: true

  4. Add a Find button to the toolbar:

    Netscape*toolBar.userCommand1.commandName: findInObject
    Netscape*toolBar.userCommand1.labelString: Find
    Netscape*toolBar.userCommand1.commandIcon: Search

  5. Create a custom reply message for Netscape Messenger:

    Netscape*strings.21928: %s proclaimed:<p>

  6. Disable the <BLINK> tag:

    Netscape*blinkingEnabled: false

  7. Send error messages to the console and not popup dialogs:

    Netscape*useStderrDialog: false
    Netscape*useStdoutDialog: false

There are many other things you can configure through X resources; be sure to
have a look at the Netscape.ad file for more ideas.
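
Remember that X resources are read when the program starts. Assuming your settings live in ~/.Xdefaults as described above, you can load changes into the running X server with xrdb before restarting Netscape:

xrdb -merge ~/.Xdefaults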

4.2. ~/.netscape/preferences.js

Settings in this file are the other way to configure Netscape and for some things it
is the only way. This file cannot be edited while Netscape is running because the
program reads in all the values when the program starts and then writes out a
new copy of the file when it exits (or crashes). Be sure to exit Netscape first
before editing this file.

The toolbar buttons can be disabled through this file:

user_pref("browser.chrome.disableMyShopping", true);
user_pref("browser.chrome.disableNetscapeRadio", true);
user_pref("browser.chrome.useGuideButton", true);

You can also change the default start page for Netscape Messenger through this
file:

user_pref("mailnews.start_page.enabled", false);
user_pref("mailnews.start_page.url", "http://your.url.here");

Netscape keeps a fairly comprehensive list of settings that can be present in this
file on their web site. The URL is:

http://developer.netscape.com/docs/manuals/communicator/preferences/

Be sure to check that site for additional settings you can configure through the
preferences.js file.

5. Fonts

Fonts can be difficult to work with in Netscape. The defaults are pretty bad and it
usually turns people away because they can’t read Slashdot in 6 pt Times or something
like that. There are several tricks to getting the fonts looking better under Netscape.

  1. Motif Fonts: One of the first things I do on a system is change the fonts that Motif
    uses to draw text on the toolbar buttons and menus. You do this with a set of X
    resources:

    Netscape*fontList: nexus
    Netscape*XmTextField.fontList: nexus
    Netscape*XmText.fontList: \
    -adobe-courier-medium-r-*-*-*-120-*-*-*-*-iso8859-*
    Netscape*XmList.fontList: \
    -adobe-courier-medium-r-*-*-*-120-*-*-*-*-iso8859-*
    Netscape*menuBar*historyTruncated.fontList: \
    -*-helvetica-medium-o-*-*-*-120-*-*-*-*-iso8859-*
    Netscape*popup*fontList: \
    -*-helvetica-medium-r-normal-*-*-120-*-*-*-*-iso8859-*
    Netscape*licenseDialog*text.fontList: \
    -adobe-courier-medium-r-*-*-*-120-*-*-*-*-iso8859-*

    You can use the xfontsel program to generate new font resource lines if you
    want to use different fonts for your menus and buttons.

  2. Font Server: The method below is specific to a system using XFree86. I hear it
    works nicely, but I’ve never tried it. These changes should be made outside of X.

    1. Force 100dpi on the local display by opening the Xservers configuration
      file for xdm. On FreeBSD, this is /etc/X11/xdm/Xservers. Change this
      line:

      :0 local /usr/X11R6/bin/X

      To:

      :0 local /usr/X11R6/bin/X -dpi 100

    2. Modify the font server to use 100dpi fonts by default. Open the font server
      configuration file (/etc/X11/fs/config on FreeBSD) and change this line:

      default-resolutions = 75,75,100,100

      To:

      default-resolutions = 100,100,75,75

    3. If you like smaller fonts over larger ones, change the default-point-size in
      the font server configuration file to a smaller value. The units for this value
      are decipoints (a misnomer if you ask me, since the config file setting is
      default-point-size and not default-decipoint-size), so 120 means point size
      12.
    4. Lastly we can tell the X font server to not serve out scaled fonts. This can
      be considered optional. In the font server configuration file again
      (/etc/X11/fs/config on FreeBSD), find the catalogue line. It will have a
      series of directories listed, somewhat like a path variable (but with
      commas). For every directory that has a ":unscaled" equivalent, remove
      the one without the ":unscaled" suffix. If you’re missing the ":unscaled"
      ones, I suppose you can ignore this step.
  3. TrueType Fonts: If you're using XFree86 4.x or another server that supports
    TrueType fonts, you can add *.TTF font files from Windows or another source.
    You cannot use TrueType fonts from MacOS, though; they use a different file format.

    Microsoft offers common TrueType fonts for free on their web site:

    http://www.microsoft.com/typography/

    Unfortunately you can only get them as self-extracting executable files, but I’m
    sure you can find a Windows machine to extract them on.

    Once you have the TrueType fonts you want installed, copy them to
    /usr/X11R6/lib/X11/fonts/TrueType (make that directory if it doesn’t exist). You’ll
    want to make sure you have the freetype module loaded. Check your
    /etc/X11/XF86Config file for that. Before X can use the fonts, you’ll have to
    create a fonts.dir file for the TrueType font directory. Use ttmkfdir to create the
    file (freshmeat search if you lack the utility). Once you have the TrueType fonts
    installed, you can select them in the Netscape preferences dialog.

6. Network Settings

Everyone knows the Netscape thing that happens when you try to pull up a site that isn't
up or no longer exists: your browser hangs until the network connection times out. Very
annoying. Fortunately, Netscape has a few preferences.js settings that allow you to fix
some of these things.

  1. Load images after loading text.

    user_pref("images.incremental_display", false);

  2. Increase the maximum number of simultaneous connections that Netscape
    keeps open. The default is 4, which is suitable for a slow modem (that’s
    redundant). Modem users can bump this to 6, and direct connection users
    can safely use any value higher than 8.

    user_pref("network.max_connections", 12);

  3. Favor UI refreshes over network activity (this is the one that fixes the problem
    described above).

    user_pref("network.speed_over_ui", false);

  4. Increase the size of the TCP buffer.

    user_pref("network.tcpbufsize", 256);

  5. Decrease the network connection timeout for quicker identification of dead
    sites.

    user_pref("network.tcptimeout", 25);

    NOTE: I used 25 for this for several weeks and ran into some issues. It was
    timing out before the system had completed the DNS lookup. Might want to
    use 64 or 128 or even higher depending on your connection speed.

7. Plug-ins

On Windows or MacOS, a browser plug-in is easy to install and oftentimes already
there. Under UNIX we don't have a lot of plug-ins, and the ones we do have don't have
automatic installers. The following is a list of available plug-ins and where you can get
them. Installation of each plug-in varies, but they all come with instructions that explain
how to get it working.

7.1. Macromedia Flash Player

Macromedia offers version 5.0 of its Flash Player for Linux on Intel hardware.
Download it from their site and follow the instructions on the download page to
get it installed. The URL is:

http://www.macromedia.com/shockwave/download/

Macromedia also offers Flash Player for IRIX and Solaris SPARC, but not Linux
on non-Intel architectures.

7.2. Acrobat Reader

Acrobat Reader is an oddball (ever heard of the Acrobat Reader crawl?). It can
be configured as a helper application or a plug-in. Helper applications are
explained in the next section. To use Acrobat Reader as a plug-in, create a
symbolic link from the nppdf.so file in your Acrobat program directory to the
Netscape plugins directory.

7.3. Plugger

Plugger is a streaming media plug-in for Netscape on UNIX. It spawns external
applications to handle the content. For more information:

http://www.hubbe.net/~hubbe/plugger.html

7.4. Unix MIDI Plugin

In the event that you encounter a web page with useful background MIDI music,
the Unix MIDI Plugin is what you need. It uses TiMidity for software-based
wavetable synthesis. Who cares?! If you really want to install this one, check
out this page:

http://unixmidiplugin.tripod.com/

7.5. RealPlayer

Real Networks offers RealPlayer for the Linux platform, as well as several other
UNIX platforms. The recommended way of using it with Netscape is to configure
it as a helper application, as described in the section below.

8. Helper Applications

Netscape can be configured to spawn external programs when it encounters a certain
document type. The nullplugin handles this, despite having a name which makes it
sound pointless. I like to configure Netscape to spawn Acrobat Reader for PDF files
and RealPlayer for that sort of content. It’s quite easy to configure and can even be
done through the Preferences dialog. But, there is a very useful site that is actively
maintained that provides a replacement .mailcap file for use with Netscape which has all
kinds of helper applications configured. The author provides precompiled binaries for
Solaris, but he also notes where he got the source so you can build it on your own. The
site is:

http://home.swipnet.se/~w-10694/helpers.html

I won’t bother reproducing that here because the author of that site does a great job at
explaining it all. I usually don’t use his entire mailcap, but rather go and add things as I
need them.

NOTE: Some of the software he uses is really really old, but it still works. There are
alternatives that you can configure instead, but remember that you want a helper
application to start quickly, which is one advantage to using old featureless software.

9. Command Line Switches Specific to Netscape

There are several command line options available with the UNIX version of Netscape.
For starters, you can start any component of Netscape Communicator by using an
option on the command line when you run netscape:

  -messenger, -mail    Open Messenger
  -composer, -edit     Open Composer

There are several UI command line switches that you may find useful:

  -display [number]            Specify the X display to use
  -visual [number]             Specify the X visual to use
  -no-about-splash             Disable the splash screen
  -ignore-geometry-prefs       Ignore window geometry saved for session
  -dont-save-geometry-prefs    Don't save session's window geometry
  -dont-force-window-stacking  Ignore the alwaysraised, alwaysopened, and
                               z-lock attributes of JavaScript window.open()
  -component-bar               Show only the component bar
  -geometry =WxH+X+Y           Specify geometry (default is 620x950+630+40)

Some UNIX workstations only support low color depths. Netscape offers some options
that make it easier to work on those platforms:

  -install          Install a private color map
  -no-install       Use the default color map
  -ncols [number]   Set the maximum number of colors to allocate for images
  -mono             Force 1-bit deep image display

The coolest feature, in my opinion, of Netscape on UNIX is the ability to control it via the
command line. You can open web sites, open files, save sites and files, add new
bookmarks, and other things, all from the command line and all on the currently running
copy of Netscape. The following site contains the details needed to use the remote
control functionality:

http://home.netscape.com/newsref/std/x-remote.html
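
As a quick sketch of what the remote control looks like in practice, the -remote switch sends a command to an already-running Netscape (the URL here is just an example):

netscape -remote 'openURL(http://www.freebsd.org/)'
netscape -remote 'openURL(http://www.freebsd.org/, new-window)'

The first form loads the page in the existing window; the second opens a new one.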

Configuring Apache

How to install and configure Apache with custom options, including enabling SSL, CGI, SSI, and FrontPage Server Extensions.

1. Basic Configuration

1.1. Install Apache

This is out of scope here; go to http://httpd.apache.org for help. I would recommend not using an RPM/deb/whatever for Apache. My philosophy is roll your own Apache, Perl, and kernel, always. If you want to install mod_perl or mod_ssl or the like, and you really don't feel comfortable trying to compile it all yourself, then you may want to try Apache Toolbox, at http://www.apachetoolbox.com.

For the rest of the document, I'm going to assume that you have Apache installed in /usr/local/apache, your docroot in /home/httpd/html, and your cgi-bin in /home/httpd/cgi-bin.

1.2. Preliminary Setup (httpd.conf)

Apache comes almost ready to use after installation. I would recommend that you go over the config
file, /usr/local/apache/conf/httpd.conf, before firing it up the first time. The file is
extremely well-documented, and you shouldn't have any problems as long as you take time to practice
the basic skill of reading. Nevertheless, I'll go ahead and explain the layout a little
bit, and list some of the things I personally had to change.

1.2.1. File Layout

This is explained at the top of the file. There are 3 basic sections to the httpd.conf file, as
follows:

  1. Options which modify the behavior and operation of the whole Apache server, aka the ‘global
    environment.’
  2. Options which set the behavior of the ‘default server.’ Things like security, file access,
    document sources, CGI settings, etc. are configured here. For a basic server, this is all you’ll
    need. Options in this section will also be the default options for virtual servers. More on those
    later.
  3. Settings for virtual servers. Explained in section 2 .

1.2.2. Resource & Access Config

There are two directives, ResourceConfig and AccessConfig, which basically aren't
used anymore. The files to which they point default to being empty, and should probably stay
that way. If you're going to be using the FrontPage extensions, set the options like this:
ResourceConfig /dev/null
AccessConfig /dev/null

1.2.3. Extended Status

You may find it helpful to find out what's going on with your new server. Apache provides
a special URL to help you with this, /server-status. To have it show you more information:
ExtendedStatus On

1.2.4. Port Number

To aid me in writing this document, I did an install of Apache 1.3.23. I don’t know how long
it has been this way, but apparently Apache now defaults to running on port 8080. This just isn’t very nice at all…
Port 80

1.2.5. User Setup

Apache needs to run as an unprivileged user on your system. RedHat-type systems come pre-configured
with the nobody user. I’m not sure about anything else, but it seems like Debian may have
a www user…?
User nobody
Group nobody

1.2.6. Server Admin

The email address of the server’s administrator.
ServerAdmin admin@domain.com

1.2.7. Server Name

This needs to be the primary resolvable address for your website.
ServerName www.domain.com

1.2.8. Allow Override

See .htaccess, section 4 .

1.2.9. Document Root

This specifies the default directory where your html files and whatnot are pulled
from. Since I’m assuming the directory /home/httpd/html ,
DocumentRoot "/home/httpd/html"

1.2.10. Directory Sections

Apache has sections of its config file inside of <Directory> </Directory> tags.
These are for setting options on indexing, execution permissions, access permissions, etc.
on certain directories. For instance:
<Directory "/home/httpd/html">
Options Indexes FollowSymLinks ExecCGI Includes
AllowOverride All
Order allow,deny
Allow from all
</Directory>

Another directory section you may want to modify is the UserDir directory.
This is for functionality like on prism, where everything in your ~/public_html directory
will be served from a URL like http://www.prism.gatech.edu/~gte000a . By default this section
is commented out. If you uncomment it, users will be able to serve web pages. If you have
a few friends as users that you trust, you may want to give more lax permissions:
<Directory /home/*/public_html>
AllowOverride FileInfo AuthConfig Limit Options
Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec ExecCGI
<Limit GET POST OPTIONS PROPFIND>
Order allow,deny
Allow from all
</Limit>
<LimitExcept GET POST OPTIONS PROPFIND>
Order deny,allow
Deny from all
</LimitExcept>
</Directory>

1.2.11. Directory Index

When a directory is specified in the URL instead of a page, this directive comes into
play. Apache will look for files listed here, in this order. If it finds one, it
sends this page by default. If it doesn't find one, and Options Indexes applies
to this directory, it will display a listing of all files (except those explicitly
hidden in the next section).
DirectoryIndex index.htm index.cgi index.html index.php index.php3 index.pl

1.2.12. Files Sections

These tags let you set access rights on specific files. For instance, CGI authors
often have a file with, say, usernames and passwords for a database that must be
accessed for the CGI executable. These can be protected from users on the system
by chown nobody.nobody access.conf; chmod 600 access.conf , but this doesn’t
keep somebody on the web from clicking the file or typing the name into the URL.

<Files ~ "^\.ht">
Order allow,deny
Deny from all
</Files>
<Files "*.inc">
Order allow,deny
Deny from all
</Files>
<Files "*.conf">
Order allow,deny
Deny from all
</Files>

1.2.13. Add Handler

Inside of the <IfModule mod_mime.c> directive live the AddHandler statements.
These are useful for SSI (Server-Side Includes) and CGI.

I'm not going to go over what these are; you get to figure that out for yourself. In my
setup, I have Perl and Python files recognized as CGI scripts, and basically all HTML
files are parsed for SSI.

AddHandler cgi-script .pl
AddHandler cgi-script .py
AddHandler server-parsed .shtml
AddHandler server-parsed .html
AddHandler server-parsed .htm

1.2.14. Location Sections

Basically the only <Location> section I find useful is /server-status.
It shows you server uptime and recent requests, among other things. If you'd like
to use it, uncomment the section in the file, and add the locations you'd like
to access it from.

<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from 128.61.63.84
Allow from localhost
Allow from billy
Allow from 128.61.63.88
</Location>

1.3. Starting it Up

And here’s the final (and easiest) step in the process. Simply run
/usr/local/apache/bin/apachectl start
to get it going. Point your browser to http://localhost/ and cross your
fingers…

2. Virtual Servers or Hosts

Virtual Servers or Virtual Hosts are a method by which you can run multiple websites
with one instance of Apache on one machine. You can do this by using different IP
addresses or different port numbers, but the easiest and probably most common way is
by using name-based virtual hosts. What this means is that Apache looks at what domain
name was used to reach the web server and chooses different content/configuration based on that.

You can see full documentation at http://www.apache.org/docs/vhosts/.

If you want to use Virtual Hosts, you must enable it:
NameVirtualHost *:80
Like it shows in the configuration file, almost any option can be overridden for a
virtual server. Probably the simplest Virtual Host would just have a different DocumentRoot directive. I’ll list an example or two from my setup:
<VirtualHost *:80>
ServerName maes.progoth.com
DocumentRoot "/home/httpd/html/maes/"
AddHandler cgi-script .pl
<Directory "/home/httpd/html/maes">
Options Indexes FollowSymLinks ExecCGI Includes
AllowOverride All
Order allow,deny
Allow from all
</Directory>
ScriptAlias /cgi-bin/ /home/httpd/html/maes/cgi-bin/
</VirtualHost>
<VirtualHost *:80>
ServerName www.mcsweetie.com
ServerAlias *.mcsweetie.com mcsweetie.com
ServerAdmin m8s_in_liver@yahoo.com
DocumentRoot "/home/bob/mcsweetie.com/"
<Directory "/home/bob/mcsweetie.com">
Options Indexes FollowSymLinks ExecCGI Includes
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>

Make sure to have a virtual host that is your default server. Since you’ve probably
already listed default options, all you’ll need to have in this Host is ServerName and ServerAlias .

Keep in mind that you can't just make up names, stick them here, and expect them to
work; you have to have a DNS entry for the domains pointing to your IP.

3. SSL

I'm not going to explain what SSL is. You can read more than you want to know at http://www.modssl.com. There are other implementations of
SSL for Apache, but from everything I hear, just use mod_ssl.

3.1. Server Certificate

Before you can run your server, you’ll need to create a server certificate. You can
find everything you need to know at http://www.modssl.com/docs/2.8/ssl_faq.html ,
but I’ll give a quick rundown here. You’ll need to have OpenSSL installed, and a tool from mod_ssl.

Create a /cert directory in /usr/local/apache . Make sure only root has
permissions to the directory (700). The openssl executable should probably be
in your path.

openssl genrsa -des3 -out server.key 1024
openssl rsa -noout -text -in server.key
openssl rsa -in server.key -out server.key.unsecure
openssl req -new -key server.key -out server.csr
openssl req -noout -text -in server.csr

This creates a server key. You'll need to replace your server.key with the server.key.unsecure
if you don't want to be asked for your password every time Apache starts up.
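
A minimal sketch of that swap, using the file names above (keeping a backup of the passphrase-protected key is my own suggestion, not a step from the mod_ssl FAQ):

cp server.key server.key.secure
cp server.key.unsecure server.key
chmod 400 server.key

Since the resulting key is unprotected, it should be readable by root only, hence the chmod. The next step is to create a "Certificate Authority" to sign your key with.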

openssl genrsa -des3 -out ca.key 1024
openssl rsa -noout -text -in ca.key
openssl req -new -x509 -days 365 -key ca.key -out ca.crt

Next, according to the mod_ssl FAQ, there’s a shell script in the /contrib directory of the mod_ssl distro.

/path/to/modssl/contrib/sign.sh server.csr

You can now copy the server.crt et al. to the appropriate directories as
listed in section 3.3. Make sure those directories are only readable
by nobody.

3.2. Random Directives

There are a couple of different options that you must set throughout the httpd.conf file to enable SSL. Usually they're next to another similar rule.

3.2.1. Load Module

There’s a place in the conf file for LoadModule directives.

<IfDefine SSL>
# the module path below is the mod_ssl default; yours may differ
LoadModule ssl_module libexec/libssl.so
</IfDefine>

3.2.2. Port

There should already be a line in the file with Port 80 . We want to make
it listen on port 443, which is the standard SSL port. Any port may be used, in
the same way that unencrypted HTTP traffic can be on any port.

Port 80
<IfDefine SSL>
Listen 80
Listen 443
</IfDefine>

3.2.3. Name Virtual Host

I’ll explain this in the Virtual Host section.
NameVirtualHost *:443

3.2.4. Add Type

Apache needs to know about some of the file types associated with SSL operation.

<IfDefine SSL>
AddType application/x-x509-ca-cert .crt
AddType application/x-pkcs7-crl .crl
</IfDefine>

3.2.5. Module Options

The mod_ssl module has to have some things told to it about files and whatnot.
This can basically go anywhere in the global or default server config, as long
as it’s after the LoadModule ssl_module statement.

<IfModule mod_ssl.c>
SSLPassPhraseDialog builtin
SSLSessionCache dbm:/usr/local/apache/logs/ssl_scache
SSLSessionCacheTimeout 300
SSLMutex file:/usr/local/apache/logs/ssl_mutex
SSLRandomSeed startup file:/dev/urandom 512
SSLRandomSeed connect file:/dev/urandom 512
SSLLog /usr/local/apache/logs/ssl_engine_log
SSLLogLevel info
</IfModule>

3.3. Virtual Host

The final step in setting up your SSL server is a virtual host directive.
The port that Apache is serving encrypted data through is just like any other
virtual server, and therefore can be given any options, such as a different
document root. For instance, on my server, the only thing I need encrypted is
the web-based access I provide to GaTech mail. Therefore my SSL server is
limited to the /mail subdirectory of my main document root.

<IfDefine SSL>
<VirtualHost *:443>
DocumentRoot "/home/httpd/html/mail"
<Directory "/home/httpd/html/mail/">
Options Indexes FollowSymLinks ExecCGI Includes
AllowOverride All

Order allow,deny
Allow from all
</Directory>
ServerName www.progoth.com
ServerAdmin admin@progoth.com
ServerAlias progoth.com progoth
ErrorLog /usr/local/apache/logs/error_log
TransferLog /usr/local/apache/logs/access_log

SSLEngine on
SSLCipherSuite ALL:!ADH:!EXP56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile /usr/local/apache/conf/ssl.crt/server.crt
SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/server.key
SSLCACertificatePath /usr/local/apache/conf/ssl.crt
SSLCACertificateFile /usr/local/apache/conf/ssl.crt/server.crt
SSLVerifyClient none
<Files ~ "\.(cgi|pl|shtml|phtml|php|php3?)$">

SSLOptions +StdEnvVars
</Files>
<Directory “/home/httpd/cgi-bin”>
SSLOptions +StdEnvVars
</Directory>
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
CustomLog /usr/local/apache/logs/ssl_request_log "%t %h \
%{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>
</IfDefine>

3.4. Starting Up

Did you notice all the <IfDefine SSL> statements? Those are just like preprocessor
statements in, say, C. The code in between the opening and closing <IfDefine>'s is only
executed if it has been told to be. When mod_ssl is patched into Apache, it creates a new
option in apachectl, startssl.

/usr/local/apache/bin/apachectl startssl

Keep in mind that running with the restart option doesn't seem to enable SSL, so
you'll need to run stop and then startssl. This might have been changed since I've tried it, though.

4. .htaccess

The .htaccess file is a simple mechanism for setting options in specific directories.
In effect, it can override practically any settings from the httpd.conf file, which is where
the AllowOverride directive comes in. The two I find useful are Options and AuthConfig.
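
For reference, here is a sketch of what that looks like in httpd.conf, using the example docroot assumed throughout this document:

<Directory "/home/httpd/html">
AllowOverride Options AuthConfig
</Directory>

With this in place, a .htaccess file in any directory under the docroot may override those two classes of settings.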

4.1. AllowOverride Options

I find this the most useful when I'm dealing with directories full of images. A common
occurrence is for a user to go to a directory of images and see a whole listing, instead
of only the images the author wants to post. A common way to get around this is to
put an empty index.html file in the directory. The way I like is to allow options
to be set in a .htaccess file, and put a .htaccess file in the directory with this
line:

Options -Indexes

4.2. AllowOverride AuthConfig

With AuthConfig you can use the .htaccess file to password protect directories.
Not only does it restrict access, but it also sets a nice REMOTE_USER variable that
is oh-so-handy in CGI programming…

The .htaccess file looks something like so:

AuthType Basic
AuthName "Administration"
AuthUserFile /home/httpd/.htpasswd
AuthGroupFile /home/httpd/.htgroups
require group admin

The files can be named anything you want. More on those in a moment.
The require statement is fairly flexible. In this example I’m requiring a user
that is in the “admin” group. Other valid directives might include:

require valid-user
require user Billy

4.2.1. .htpasswd

The .htpasswd file is a simple one: a list of usernames and
passwords separated by a colon, one username and password per line. The password is a hash,
created with the standard Unix crypt() function. Or, it may optionally
be an MD5 hash, but I don't know anything about that. There's a program in the
Apache /bin directory called htpasswd to help with creating and editing
these files. Run it with --help to see how to use it; basically
./htpasswd passwordfile username
adds a new user. The -c option will create a new file. The -b option lets you specify a password after the username on the command line.
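For example (using the password file path from the .htaccess above; the usernames
are hypothetical):

./htpasswd -c /home/httpd/.htpasswd billy   # create the file and add user billy
./htpasswd /home/httpd/.htpasswd jane       # add or update user jane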

4.2.2. .htgroup

Not a whole lot to tell here. Contents of the file:
groupname: username anotheruser
anothergroup: admin auser

5. FrontPage Server Extensions

The FrontPage extensions can be a life saver if you need to host less technically-inclined
users. Even users who don’t want to use the cheesy FrontPage themes or “Web Objects” or
whatever can be helped a great deal by the simple publishing method (which is actually
built on DAV, an open standard…the #1 signal that FrontPage was consumed by Microsoft,
not developed by them).

The software and manuals are available at http://www.rtr.com/fpsupport/ . It’s a fairly
easy process, so I’m not going to spend a lot of time on it.

I’m going to assume you’re not using the version of Apache with the FrontPage extension
patched in.

The first step is to download the extensions, create /usr/local/frontpage , and
untar fp40.linux.tar.Z into /usr/local/frontpage . You’ll notice 4.0 isn’t the
latest version; given Microsoft’s less-than-stellar security track record, I decided
to stick with the FP2000 extensions. Next, do the following (ripped straight from
the Installation FAQ):

cd /usr/local/frontpage
ln -s version4.0 currentversion
cd currentversion/bin
fpsrvadm.exe -o install -p 80 -servconf /usr/local/apache/conf/httpd.conf
fpsrvadm.exe -o chown -xUser nobody -xGroup nobody

Then restart Apache.

This installs the FrontPage extensions on your main server. The only problem I ran
into was that I had an /admin directory in my main server which was protected
by a .htaccess file. For some reason FrontPage didn’t like that, so I had
to rename the directory.

The FP extensions integrate nicely with VirtualHosts. To make a virtual host into a
FrontPage web, run this command:

fpsrvadm.exe -o install -p 80 -m vh.domain.com -xu nobody -xg nobody -username admin \
-password password -t apache -s /usr/local/apache/conf/httpd.conf

where -username is your FrontPage user id and -password is your
FrontPage user password, and -m is the virtual server you’re installing on.

The website I gave for the extensions has a lot of documentation if you're having
problems. I found the setup and the username/password settings flaky and confusing,
and you may have to play around with the setup for a while, but once it's working it seems
to be flawless.

6. Resources

Introduction to Security Basics

Table of Contents

1. Step 1: Eliminate Unneeded Programs

Identify programs that are running, particularly those that are
accepting connections from the network, and eliminate unneeded ones. To
see a list of open services and what processes are providing them, use:

netstat -tulp

Sample output:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address        Foreign Address  State    PID/Program
tcp        0      0 mummu.localnet:ssh   *:*              LISTEN   7142/sshd
tcp        0      0 *:time               *:*              LISTEN   157/inetd
tcp        0      0 *:daytime            *:*              LISTEN   157/inetd
tcp        0      0 *:discard            *:*              LISTEN   157/inetd
udp        0      0 *:discard            *:*                       157/inetd

Most of these fields are self-explanatory. The 'Local Address' field displays
what port and IP address the service is listening on; an IP of 0.0.0.0 means
that the service is listening on all valid IPs for this machine. The last
field identifies the process ID and name of the process that has this service
open (note: only root will be able to see this information).
If you want to see the numeric IP addresses and port numbers, add 'n' to the
list of arguments to netstat (i.e. netstat -tulpn). The complete listing of
services that netstat uses to apply these names is in /etc/services.

Notice that the process named ‘inetd’ is providing numerous services. inetd,
and its successor xinetd, are called ‘superservers’. Their role is to listen
on numerous ports on behalf of a variety of simple services, set in
/etc/inetd.conf and /etc/xinetd.conf respectively. When a connection comes
in on one of these ports, (x)inetd will fire up the appropriate server and
hand off the connection to it, and in most cases the server will handle that
one connection and then quit.

Sample inetd.conf entries:

discard  stream  tcp  nowait  root  internal
discard  dgram   udp  wait    root  internal
#ircd    stream  tcp  wait    root  /usr/local/sbin/ircd  ircd -i

These entries describe the following services:

  1. listen on the discard tcp AND udp ports (found by looking in
    /etc/services). When a connection comes in, use the internal handler
    (built into inetd) for this protocol.
  2. (commented out – inactive) listen on the ircd tcp port. When a connection
    comes in, run /usr/local/sbin/ircd -i, with the privileges of the user ‘root’

The xinetd config file works in a similar way, but allows slightly more
flexibility in the specification of the service.
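For comparison, here is a sketch of how the same discard service might be
expressed as an xinetd stanza (the attribute names come from xinetd.conf(5);
treat this as illustrative rather than a drop-in config):

service discard
{
	type        = INTERNAL
	id          = discard-stream
	socket_type = stream
	protocol    = tcp
	wait        = no
	user        = root
}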

Both inetd and xinetd utilize 'tcp wrappers', a very rudimentary but useful
security tool written by Wietse Venema. This system uses information
in the files /etc/hosts.allow and /etc/hosts.deny to allow or block services
to clients based solely on IP address (this system is weak since IP addresses
can easily be spoofed, but it can defeat unsophisticated attacks). Newer
versions of tcp wrappers have a semi-complex grammar with which to express
who is allowed to connect to which services, so read

man hosts_access

to learn how to configure these filters.
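For example, a minimal sketch of the common "deny by default" policy (the
subnet is illustrative):

# /etc/hosts.deny -- block everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- permit ssh from the local subnet only
sshd: 192.168.0.0/255.255.255.0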

You probably have a service called ‘portmap’ running on your system. This is
an extremely simple service that is a sort of registry server for programs
that use Sun’s RPC (remote procedure calling) network interface. The most
common end-user services that need portmap/RPC to be running are NFS and NIS,
which are a networked filesystem and a networked login system, respectively.
If you are using neither of these (odds are you aren’t, and if you want to run
a secure system, you should probably not be anyway) then you can safely disable
portmap.

Another port that is commonly open but does not need to be is the X window
server. This will be a tcp port in the range of 6000+. It is only needed
if you run X processes on a remote machine that display to the local machine,
and this is better done with SSH port forwarding anyway. More commonly, it
gives an attacker an easy way of snooping on what you are doing in X,
including keystroke logging, or popping up random windows on your desktop.
If you decide you don’t need this port to be open, add the option

'-nolisten tcp'

to your X-server startup params. If you use startx to fire up your X
desktop, this will be located in

/etc/X11/xinit/xserverrc

If you use one of the graphical login managers (xdm, gdm, or kdm are most
common), then the parameters will be found in the config files for
whichever one you use, probably under /etc/X11 somewhere.
If you decide you DO need remote X capability, then learn to use the
xhost command to set who can connect to your server by IP-only authentication.
For local users sharing an X server (like when you startx as your normal user
but then su to root), it's better to copy the .Xauthority token from the home
directory of the user who owns the server into the home directory of whoever
needs access to the server.
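A quick sketch of both approaches (the IP address and usernames are hypothetical):

# IP-only authentication (weak): allow one trusted host
xhost +192.168.0.5

# Cookie-based sharing: let root reuse your session's token
su -
cp /home/youruser/.Xauthority /root/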

2. Step 2: SSH – The Secure Shell and so much more

2.1. What is SSH?

SSH is a flexible set of protocols for encrypting interactive and bulk
traffic between two hosts. SSH is the preferred interactive login
utility on Linux and UNIX systems. Its predecessor, telnet, is
shunned because it sends passwords over the network in plaintext. So,
anyone who sees the traffic between you and your destination system
can easily grab your passwords and watch everything you do.

Typically, one of the first steps in securing a Linux system is to
disable telnet access and enable SSH access. In modern Linux
distributions this may already be done for you. Also, the ssh server is not
typically run under inetd, although that is certainly possible. See
the previous sections for information on integration into a secure system.

The preferred software for ssh and its associated server, "sshd", is
called OpenSSH. It is developed by the OpenBSD group and is ported to
many other UNIX variants including Linux. OpenSSH currently supports
all of the features of the SSH1 and SSH2 protocols, including secure
port forwarding, X Window server proxying, and ssh-key forwarding and
authentication, to name a few.

2.2. The Basics of How SSH works

SSH uses two forms of encryption through the lifetime of a connection,
symmetric to encrypt the traffic, and asymmetric to exchange the
symmetric keys. When one sees messages about "host keys" this is
referring to the permanent (asymmetric) public and private keys.
These should not change for any given host. I'll spare you the
details of how this works, but just know that in order to talk to an
ssh server, you must have its public (host) key. Usually, ssh will
ask you if you trust a new host key when you first try to connect. It
is referring to the host public key, in either RSA or DSA key format.

When one connects to a new server, the server's public key must first
be transmitted to you. This is where ssh is most vulnerable to
outside attack, since you are assuming that you are talking to the
correct machine the first time. If this assumption is not correct,
then someone else has conducted what is called a "man-in-the-middle"
attack, where they send you their public key instead of that of the
server you wanted to communicate with, and can from there possibly
obtain your password and monitor your session. This is why it is
important to keep your host keys safe and to verify public keys when
you first connect to a new server.

Also, I must note that SSH protocol 2 is far more resistant to
potential attacks than SSH protocol 1 and should always be used. In
OpenSSH the "-2" switch will force version two; you may also edit your
configuration files to make it the default.
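For example, to make protocol 2 the default for your own account (Protocol is
a standard OpenSSH client option):

# in ~/.ssh/config (or system-wide in /etc/ssh/ssh_config)
Protocol 2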

2.3. How to Move Files

Probably the second most used feature in SSH is secure copy, or "scp"
for short. You can use scp to move files between two hosts and have
the transfer be encrypted. The syntax of the command is as follows:

(case 1, moving a local file to a remote computer)

scp /tmp/local_file sonny@acme.gatech.edu:/home/gte000/

Notice the user@hostname:path format in the second argument; this tells
scp what machine to connect to, what user to authenticate as, and where
to put the destination file.

(case 2, moving a remote file to the local machine)

scp sonny@moefo.net:/home/sonny/blarg /tmp

The syntax is very similar to the first case; use the same format for
the remote machine.

2.4. For More Information

For more detailed information on SSH, see Moshe Jacobson's presentation
at the LUG website:

http://www.lugatgt.org/articles/using_ssh/

and, of course, "man ssh".

3. Step 3: Logs

Linux has a very flexible, simple system for handling system event logs.
The process ‘syslogd’ collects messages from all other processes on the
system, usually through the special file /dev/log, and decides how to
handle them based on the configuration in /etc/syslog.conf. Syslogd’s
companion program, klogd, snarfs up messages that are printed out by the
kernel and feeds them to syslogd.

There are two values that syslogd uses to determine how to handle a
particular message: that message’s "priority" and "facility". The
facility of a message specifies what type of event it is reporting;
for instance, kern is the facility of all messages from the kernel,
mail from the mail system, lpr for the printing system and so on.
The priority of a message is a relative measure of how severe the
message is, ranging from ‘debug’ for relatively trivial messages to
‘panic’ for serious problems. Common handling of these messages is
to do one or more of the following:

  1. Write the message to a file in /var/log. /var/log/messages is a
    popular place to stuff the bulk of messages an admin will typically
    want to see, /var/log/syslog is also a likely place to look.
  2. Write the message to a particular user's terminal. This is typically
    reserved for serious, immediate issues that need prompt attention.
  3. Send the events to a syslogd running on another machine. This allows
    very easy centralized log-reading for large numbers of machines.
    Unfortunately, the messages are not encrypted with the normal Linux
    syslogd, so this is inadequate for the paranoid among us :-). There
    are add-on syslogd replacements that provide this feature.

It is also trivial to pipe the text of the message to another program
for handling; a common use is to email critical messages to the admin
or even to send the text to a pager.
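To make this concrete, here is a sketch of what such a configuration might
look like (the file paths and the loghost name are illustrative):

# /etc/syslog.conf (excerpt)
# The bulk of messages go to one file:
*.info;mail.none                        /var/log/messages
# The mail system gets its own file:
mail.*                                  /var/log/maillog
# Serious kernel trouble goes straight to root's terminal:
kern.crit                               root
# Forward emergencies to a central syslogd:
*.emerg                                 @loghost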

See

man syslog.conf

for information on how to tune all this.

4. Step 4: Firewalls

A firewall is a list of rules that determine what packets are and are not
allowed to enter, leave, or pass through your system. In Linux, the firewall
system to use is called ‘iptables’. iptables divides packets into INPUT
(those coming from the outside into your system), OUTPUT (those coming from
your system bound for the outside), and FORWARD (those passing through your
system, using it as a router) chains, and each chain has its own set of rules.
If you have NAT (network address translation) support in your kernel, then you
also have chains called PREROUTING and POSTROUTING, which are the first to
affect any packets coming into your system, and the last to affect any going
out, respectively. One of the big improvements of iptables over its
predecessor, ipchains, is ‘statefulness’. That means that it logically
groups all packets into ‘sessions’, so you don’t have to worry about allowing
response packets going in the opposite direction if you have already allowed
the beginning of the session, for instance. All you have to do is specify
what types of sessions you want to be allowed through.
(technically, iptables marks each packet as either NEW, ESTABLISHED, RELATED,
or INVALID depending on how it relates to preexisting sessions).

Without further ado, here are some examples of iptables scripts.

Basic firewall, allow anything going out, but nothing coming in:

#!/bin/bash

PATH=/usr/sbin

# These three lines flush all the old rules out
# of the iptables system, resetting it to its initial
# state.
iptables -F
iptables -X
iptables -Z

# These three lines set the default rule (or 'policy') to simply
# throw away any packets that aren't dealt with explicitly by
# another rule
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

# This line says to immediately accept any packets that are
# part of a preexisting session
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# These two lines log and drop anything that is trying to look
# like part of a session, but is invalid for some reason. This
# is something that shouldn't happen very often

iptables -A INPUT -m state --state INVALID -j LOG \
    --log-prefix "Invalid: "
iptables -A INPUT -m state --state INVALID -j DROP

# After this point, we don't waste time checking the state
# anymore, because the only state left is NEW. And, since we
# want to allow nothing on the inbound...

iptables -A INPUT -j LOG --log-prefix "Input: "
iptables -A INPUT -j DROP

# We allow anything outgoing (we trust ourselves and our users,
# and aren't terribly worried about trojans/virii hijacking
# our systems.)

iptables -A OUTPUT -j ACCEPT

# We don't let anybody use us as a router (yet)

iptables -A FORWARD -j LOG --log-prefix "Forward: "
iptables -A FORWARD -j DROP

# -=-= END OF SCRIPT =-=-

Basic example of having a natted subnet (192.168.0.0/24) behind this
machine which is the router:

#!/bin/bash

PATH=/usr/sbin:/bin

# We now need to specify the internal and external cards
EXT_CARD=eth0
INT_CARD=eth1

# All our internal machines will appear to the world to come from
# our one real IP, so we specify that as well
EXT_ADDR=128.121.10.11

# This is important, and a common source of errors; by default,
# Linux forwards nothing. Flipping this bit makes it forward
# everything. We refine below ;-)

echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -F
iptables -X
iptables -Z

# We'll be using the nat table, so clear it out too.
iptables -t nat -F
iptables -t nat -X
iptables -t nat -Z

iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state INVALID -j LOG \
    --log-prefix "Invalid: "
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A INPUT -j LOG --log-prefix "Input: "
iptables -A INPUT -j DROP

iptables -A OUTPUT -j ACCEPT
# We now need to split forwarded packets into incoming and
# outgoing, based on what cards they are coming from and bound
# to. Any packet that tries to go out the same card it came
# in on is probably up to something (trying to use us to spoof),
# so log and drop those

iptables -N INCOMING   # Create a new chain named INCOMING
iptables -N OUTGOING
iptables -A FORWARD -i ${INT_CARD} -o ${EXT_CARD} -j OUTGOING
iptables -A FORWARD -i ${EXT_CARD} -o ${INT_CARD} -j INCOMING
iptables -A FORWARD -j LOG --log-prefix "Forward: "
iptables -A FORWARD -j DROP

iptables -A OUTGOING -j ACCEPT

iptables -A INCOMING -j LOG --log-prefix "Incoming: "
iptables -A INCOMING -j DROP

# This line's the clincher: for anything that's leaving, use
# Source NAT to set the source IP address to our real IP, so
# that responses get back to us.

iptables -t nat -A POSTROUTING -o ${EXT_CARD} -j SNAT \
    --to-source ${EXT_ADDR}

# -=-= END OF SCRIPT =-=-

To allow incoming www requests to our real IP to go to the server running on
an internal machine with the IP 192.168.0.100, first put a line in PREROUTING
to change the packet’s destination IP address before the FORWARD chain looks
at it:

iptables -t nat -A PREROUTING -i ${EXT_CARD} -p tcp --dport www \
-j DNAT --to 192.168.0.100

and then allow the packets through:

iptables -A INCOMING -p tcp --dport www -d 192.168.0.100 \
    -j ACCEPT

To fool port-scanners into thinking that we’re not filtering at all, but
just that there’s nothing there, create a new chain called PISSOFF, and
make any packets sent there that we don’t like behave just as if there
was no filtering taking place:

iptables -N PISSOFF
iptables -A PISSOFF -p tcp -j REJECT --reject-with tcp-reset
iptables -A PISSOFF -p udp -j REJECT
iptables -A PISSOFF -j DROP

Then jump to PISSOFF instead of just DROP whenever you want to get rid
of a packet.
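For example (the telnet port is just an illustration):

iptables -A INPUT -p tcp --dport telnet -j PISSOFF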

Transitioning from Windows to Linux

Table of Contents

1. Introduction

1.1. What is this all about?

The growth of Linux toward becoming a useful desktop platform over
the past years has brought many new users to an understanding of the
alternatives that exist in both operating systems and applications.
The much-advocated system of Open Source development to improve software
quality and collaboration has also made quite a splash, bringing in
developers who would otherwise never have dreamed of becoming involved
in such large, influential projects as the Linux kernel, KDE,
Gnome, and Mozilla.

That’s all fine and good for software developers, but amid the talk
of releasing source code, arguing over architectural decisions and
which editor is the greatest in the universe, there are the cries of
the non-developers:

"What’s in it for me? How does this help me get
my job done?"

2. Why Linux / *BSD / Unix?

2.1. Customizability

Everybody uses a computer differently, so being able to adapt your
system to fit your needs is a valuable tool for getting work done
efficiently. When there is such a large selection of environments to
choose from (more on this later), the focus shifts from learning how
other people work to learning how you as a user work.

2.2. Experience

Mastery of a single particular system is admirable, but a worker
is always better off knowing what tools are available. Linux and the
applications that run on it are tools for getting your work done, and
it’s a good thing to know if it is a better tool for the job by experience
rather than a marketing brochure.

3. The User Interface

3.1. Choices

As mentioned earlier, Linux provides a dazzling array of ways to
interact with your system, from the simple text-only console, to
complete desktop environments such as
Gnome and
KDE. The components that control window placement and decoration are
referred to as "window managers", and are interchangeable. Gnome and KDE
themselves are full desktop environments, designed to be combinable with
a variety of window managers to further fit your style.
While we will be focusing on
Gnome and KDE (Gnome in particular), there are many others available,
descriptions and screenshots of which can be found
here: http://www.plig.org/xwinman/.

3.2. Configuring the User Interface

In Gnome, most of the configuration options for changing the user
interface of the default window manager (Sawfish) can be found
in the Gnome Control Center.

Tip:
Middle-clicking on the desktop and selecting
"Customize" is another method of changing Sawfish settings.

3.3. ‘Help!’ or, How To RTFM.

In Windows, standardized help is invoked via the F1 key. The standard
help system on Unix systems is "manual" (or simply "man") pages,
commonly viewed via the man command. Both Gnome and KDE are
able to view manual pages graphically from the file manager (Nautilus
and Konqueror, respectively). To use this feature, just type
"man:(application)" in the location bar.

The web, as always, is a valuable resource for finding support
in addition to any help provided with the application. In particular,
if you are getting started with an application, search for the word
"HOWTO" in addition to your search terms.

4. Doing ‘Normal’ Things

4.1. File Management

Managing files in Gnome and KDE should feel familiar to users of
the Explorer in Windows, using the standard folder and icon interface.
The Gnome file manager is called Nautilus, and will be the focus of
this section. If you are using KDE, information on managing files
can be found
here.

4.2. Understanding Unix Paths

"If the path be beautiful, let us not ask where it leads."
– Anatole France

There are three major points to remember when it comes to Unix-style
filenames:

  1. The names of files and directories are case-sensitive.
  2. All paths start at the root directory ("/").
  3. Your personal files go in your "home" directory, which is usually
    "/home/yourname" and is often abbreviated as simply
    "~" (tilde).

Your hard drives, CD-ROM drives, and floppy drives are "mounted" to
become accessible. Mounting a drive simply means that it is associated
with a path, such as "/mnt/cdrom", from which you can access
the files on the drive.

4.3. Dot-Files (Hidden Files)

Unlike Windows, files cannot be marked "hidden". Rather, files which
begin with a period (often referred to as "dot-files") are treated as
hidden files, and are usually found in your home directory to store
personal settings for programs.

4.4. Understanding Unix File Permissions

All files on a Unix system have settings for permissions. There
are nine major permission settings which can be toggled on or off:
three categories (user, group, and other), each with three settings
(read, write, execute). Each file is also owned by a user and is
assigned to a group.

The permissions for a file are usually abbreviated as 9 characters
in a row, like this:

rwxrwxrwx

The first three characters are the permissions of the file as they
relate to the file’s owner (read, write, and execute). Read and write
are self-explanatory; execute means that the file can be run as a
program. The next three characters refer to the permissions relating
to the file’s group, and the last three refer to the permissions
in regard to all other users. If instead of a character there is a
hyphen ("-"), then the permission in that space is not set.

Examples:

  • rwxrwxrwx: This means that the file is open to
    all users (all users can read and write this file). Also, all users
    can execute this file as a program.
  • rwxr-xr--: This means that the user who owns this
    file can read, write, and execute the file. Users who belong to
    the same group which the file is assigned to can read and execute
    the file. All other users can read, but not execute, the file.
  • rw-r-----: This means that the user who owns this
    file can read from and write to the file. Users who belong to the
    same group which the file is assigned to can read the file. All
    other users cannot read or write the file.
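As a concrete illustration, here is how the example permission strings above
could be set with chmod's octal notation, where read=4, write=2, and execute=1
are summed for each category (the filenames are hypothetical):

chmod 777 everybody.txt   # rwxrwxrwx
chmod 754 script.sh       # rwxr-xr--
chmod 640 notes.txt       # rw-r-----
ls -l                     # shows the resulting permission strings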

4.5. Finding Files

To search for files from Nautilus, select "Find" from the "File"
menu, or click on the "Find" button on the toolbar. The location bar
will be replaced with the Find bar, which will allow you to search for
files matching your criteria. For a more detailed search, click on
the "More Options" button multiple times.

4.6. Associating Programs to Files

In Nautilus, to associate a program with a particular file type, so that
double-clicking on icons of that type will launch the correct application,
first right-click on a file of that type, and select "Open With ->
Other Application…".

Next, select the application you wish to associate with the file type
in the list. If your application is not listed, you may add it in
the "File Types and Programs" section of the GNOME Control Center (click
on the "Go There" button in the dialog). In this case, the program we
want to use (NEdit) is already listed. Click the "Modify" button.

Finally, click on "Use as default".

4.7. Managing Music and Images

Nautilus automatically generates thumbnails for all image files,
and you can zoom in and out on the image by either using the zoom control
(next to the location bar), or by selecting "Stretch Icon" from the
right-click menu and stretching the icon larger or smaller.

Nautilus also allows you to preview and play MP3s from the file
manager. Hold the mouse over an audio file to preview the sound file
(moving the mouse away will stop the playback). Also, if you have a
directory of MP3 files, you can select "View as Music" to show the
built-in MP3 player. You can also use the method described in the
"Associating Programs to Files" section to
use XMMS or another MP3 player to
play the files.

5. Package Management

Installing Programs (or "Packages" as they are often called in the
*nix world) is handled in a multitude of ways depending on what
distribution or system you are using. Even between different
distributions there are several different package management systems
in use today. The most prevalent system is called RPM – the Red Hat
Package Manager – which is used on Red Hat, Mandrake, and SuSE (and
probably others). Beyond that, there is the Debian package management
system, which has the famous "apt" (Advanced Package Tool) set of
scripts, and the minimalist Slackware package management system.

5.1. RPM

From the man page:

rpm is a powerful package manager, which can be used to
build, install, query, verify, update, and erase individual software
packages. A package consists of an archive of files, and package
information, including name, version, and description.

RPM does package and architecture dependency checking for you and
maintains a database of what packages are installed. The manpage for
rpm is rather daunting, but thankfully there are a few simple
invocations that should get you through most situations.

Installing a new package: rpm -i foo-0.1.rpm

Upgrading an existing package: rpm -U foo-0.2.rpm
(more verbose version): rpm -Uvh foo-0.2.rpm

Removing a package: rpm -e foo
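A few query invocations are also worth knowing (the package name is hypothetical):

Checking whether a package is installed: rpm -q foo
Finding which package owns a file: rpm -qf /bin/ls
Listing the files inside a package file: rpm -qlp foo-0.1.rpm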

If console commands strike fear into your heart, don’t worry, most
RPM-based distributions come with their own GUI management system.
We don’t really know how well they work (they have generally not
been very high quality in the past).

Also of note is Mandrake's high-level "urpmi" system for
automatically fetching and installing RPM packages.

See man pages and how-to’s for more details.

5.2. Debian

Debian uses several systems in concert to handle package management.
Among them are the low-level dpkg command, the menu-driven dselect, and
the high-level apt system. Debian maintains a great deal of meta
information about packages such as dependencies, recommended packages
to install, and stability status.

For the simple tasks you will use either the apt system (specifically
the apt-get command) or dselect to manage packages on Debian. Apt-get
is a simple command line tool for package installation where you
specify installation, removal, or update of packages whose names you
know. Dselect is a menu-driven system where you can look at package
names, descriptions, dependencies, and what is currently installed.
Apt grabs packages from web-based central repositories of debian
packages, verifies their integrity, and installs them all automatically.
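For instance, a sketch of typical day-to-day apt-get usage (the package name
is hypothetical):

apt-get update        # refresh the package lists from the repositories
apt-get install foo   # fetch and install a package plus its dependencies
apt-get remove foo    # remove it again
apt-get upgrade       # upgrade everything currently installed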

5.3. Slackware

Slackware packages are merely glorified tarballs with special files
that are executed after untarring the package. No meta information is
kept other than what files were installed where.

The main commands to know are installpkg, removepkg, and (possibly
deprecated) pkgtool. The syntax of installpkg and removepkg is fairly
straightforward. Installed package information is stored in
/var/log/packages as text files with all of the installed files.
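Usage is about as simple as it gets (the package file name is hypothetical):

installpkg foo-0.1-i386-1.tgz
removepkg foo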

5.4. Autoconf

GNU autoconf is not package management per se, but it is typically how
raw source packages are configured and then subsequently built with
make when packages are not available or one is intentionally avoiding
the package manager.

Compiling and installing packages from the authors is not terribly
difficult when autoconf is used to configure them. It basically boils
down to three easy steps:

./configure
make
(as root) make install

Remember to cd into the directory where the program untarred first.

Sometimes, you will have to pass options to the configure script and
you can determine what options are available by typing:
./configure --help

Usually, this system works fairly well, but when it does not, it can
often be a rather complex task to ensure that everything is working
correctly, and you should try to find packages if possible.

6. Whizzy Stuff

6.1. Changing the Desktop Wallpaper

To change the background image of the desktop when using Nautilus,
right-click on the desktop and select "Change Background Image" from
the menu.

6.2. Changing the Toolkit Theme

You can change the appearance of buttons, scrollbars, and other elements
of certain applications. The part of an application which draws the
on-screen components is called the "toolkit", and there are a number of
different toolkits (such as GTK+, Qt, Motif, XForms, and Tk) used by
different applications. Unfortunately, since not all applications use
the same toolkit, changing the toolkit theme is different for each
application, and some toolkits are very limited in what can be
customized.

Gnome uses GTK+, as do applications written for Gnome. To change
the GTK+ theme, open the Gnome Control Center and choose the
"Theme Selector" (in the "Desktop" section). A number of themes are
bundled with Gnome; others can be downloaded from
http://gtk.themes.org/ and installed using the
"Install new theme…" button.

Below are three examples of GTK+ themes: the Metal theme, the SatinBlack
theme by Nakitoma Mitsuoni, and the CurlyMonster theme by CurlyMonster.
(The screenshots are omitted here; the CurlyMonster screenshot also used
the windowsXP Sawfish theme by Patrick McDermott.)

The GTK+ CurlyMonster Theme can be downloaded from:
http://www.themes.org/resources/542/.

The SatinBlack Theme does not appear to be available anywhere
anymore :(.

Using Procmail

Table of Contents

1. Concepts

1.1. What is Procmail?

Procmail is a utility that allows you to filter your incoming mail according
to pretty much any parameters you choose. In its most simple usage, it filters
based on pattern matching. A procmail "recipe" is basically a set of regular
expressions that should or should not be matched in an incoming email, along
with a folder to place the email into if it is matched.

Some people (like me) have multiple email addresses that are all handled by
the same server, and they want to keep the emails separate from each other for
one reason or another. Procmail will allow you to filter into different
inboxes based on which address an email was sent to.

1.2. What are Procmail’s capabilities?

Procmail can filter mail using several methods:

  • Pattern matching against a fixed regular expression
  • Negating a pattern match
  • Piping the email to a command and taking its exit status (if the
    command exits with a failure exit code, the recipe conditions are
    not satisfied)
  • Checking the length of the email

Procmail can take one of four actions with the text of an email once it has
determined the email to match the recipe specification:

  • Append the text to a file
  • Feed the text to the standard input of another program
  • Forward the email to another address
  • Process the text of the email using an external program (the
    modified text will then be considered as if it were the original
    email, and Procmail will continue to test it against the rest of
    the recipes)

The first three actions are delivering recipes, whereas the last is
a non-delivering recipe. Non-delivering recipes are useful for
modifying the content of an incoming email before it's delivered.

2. Basic Configuration

2.1. Setting up your .procmailrc

The file that will contain all your procmail recipes is
~/.procmailrc. Most systems are already configured with sendmail
or another MTA that is procmail-aware, but just to make sure your mail
actually gets filtered through procmail rather than directly delivered, you
must place the following line in your ~/.forward file:

|/usr/bin/procmail

That should be all you have in the file. Any special forwarding you want to do
can be done from within the procmailrc file. Of course, if procmail doesn’t
live in /usr/bin on your system, correct the path appropriately.

2.2. Global procmailrc settings

There are some variables that can be set from within the .procmailrc, whose
values determine specific behavior of procmail while delivering your mail.
Here is a list of the more commonly used ones. Don’t get overwhelmed. Most of
these have reasonable default values:

MAILDIR
Current directory while procmail is executing (that means
that all paths are relative to $MAILDIR).

DEFAULT
Default mailbox file (if not told otherwise, procmail will
dump mail in this mailbox). Procmail will automatically use
$DEFAULT$LOCKEXT as lockfile prior to writing to this mailbox. You do
not need to set this variable, since it already points to the standard
system mailbox.

LOG
Anything assigned to this variable will be appended to $LOGFILE.
Useful for debugging purposes.

ORGMAIL
Usually the system mailbox (ORiGinal MAILbox). If, for some
obscure reason (like `filesystem full’) the mail could not be delivered,
then this mailbox will be the last resort. If procmail fails to save the
mail in here (deep, deep trouble :-), then the mail will bounce back to
the sender.

TRAP
When procmail terminates it will execute the contents of this
variable. A copy of the mail can be read from stdin. Any output
produced by this command will be appended to $LOGFILE. Possible uses
for TRAP are: removal of temporary files, logging customised abstracts,
etc.

INCLUDERC
Names an rcfile (relative to the current directory) which
will be included here as if it were part of the current rcfile.
Nesting is permitted and only limited by system resources (memory and
file descriptors). As no checking is done on the permissions or ownership
of the rcfile, users of INCLUDERC should make sure that only trusted
users have write access to the included rcfile or the directory it is in.

DROPPRIVS
If set to `yes’ procmail will drop all privileges it might
have had (suid or sgid). This is only useful if you want to guarantee
that the bottom half of the /etc/procmailrc file is executed on behalf of
the recipient.

See the procmailrc manpage for the default values assigned to these variables.

2.3. The format of a procmail recipe

A procmail recipe takes the following format:

:0 [flags] [ : [locallockfile] ]
* condition-regex1
* condition-regex2
exactly one action line

There can be 0 or more condition regexes, each preceded by a *. The
conditions are ANDed, so if a message does not match them all, it will not
match the recipe.

By the way, the 0 following the : that marks the start of a
new rule has no significance. In older versions of procmail, the digit was
used, but due to changes in the functionality of procmail, it has become
obsolete. Therefore, we always use 0.

The condition REs can be preceded by a ! (negate the RE), ? (use the exit
code of the specified program), or < or > (ensure that the message size is
less than or greater than the number of bytes specified as the expression).

2.4. Recipe flags

There are several flags that can be placed after the :0 that affect the
behavior of the recipe. When no flags are specified, a default flag set of
"Hhb" is assumed. Any combination of the following flags can be specified, and
the new flag set will override the default flag set. Here is a list of
commonly used flags:

H
Egrep the header against the specified REs (default)
B
Egrep the body against the specified REs
D
Be case sensitive (default is case insensitive)
h
Feed the mail header to the action (default)
b
Feed the mail body to the action (default)
f
Filter program will modify the mail’s text
c
Send mail to this rule, but continue through procmailrc

The last option, c, is used on rules that do things such as send a response
to the email or send a carbon copy of the email to another address.

2.5. Types of patterns to look for

To filter a message by sender, use a rule such as:

* ^From .*sender@somedomain

Note that there are two From fields in a typical email header. The
one with no colon following it is where the mail really came from,
whereas the one with the colon after it is the address that the sender
claims to be sending from.

To filter a message by recipient, use a rule such as:

* ^To: .*your@address

Subjects also have some good information you may want to use to filter. See my
example procmailrc file for different ways I egrep the Subject.
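Putting the pieces together, here is a minimal sketch of a complete recipe
(the list address and folder name are hypothetical; the second colon on the
:0 line requests a lockfile):

# file list mail into its own folder
:0 :
* ^To: .*mylist@lists\.example\.com
IN-mylist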

3. More advanced recipe tricks

3.1. Multi-tiered recipes

If you have one recipe that is a base requirement for many other recipes, you
can nest them. Say you have two email addresses and you want to filter
differently for each:

:0
* ^To:.*joe.blow@somecompany.com
{
    :0
    * ^From:.*boss@somecompany.com
    urgent-mail

    :0
    INBOX
}

:0
* ^To:.*jblow@home.com
{
    :0
    * ^From:.*boss@somecompany.com
    /dev/null

    :0
    INBOX
}

3.2. Formail

formail is a program that takes mail headers on standard input and
modifies them in some way and prints them back onto standard output. This is
useful for auto responders that respond using the same subject as the email
you were initially sent. See my procmailrc below for an example.

4. Some Examples

4.1. Generic Examples

The best place to look for examples is the procmailex(5) manpage. It has a
much more complete set of examples than I can put in this document.

4.2. My procmailrc

Here is my procmailrc. Mail to my runslinux.net address comes into the same
box as my jehsom.com address, and I’d like to treat them differently. Sorry
the comments are sparse and the file isn’t cleaned up as much as I would have
liked, but I’m short on time, so it’s all I can do:

Download the sample procmailrc (Text format, 2.9 KB).

# Moshe's procmailrc
# This procmailrc processes email coming in on several email addresses. Mail
# for jehsom@jehsom.com should be filed separately from all other mail.

MAILDIR=$HOME/var/mail
LOGFILE=$HOME/.procmail.log
VERBOSE=on

# trash advertisements
:0
* ^Subject: \[?ADV?[] ]?:?\b
/dev/null

# Killfile billie pendleton parker
:0
* ^From: .*(billiee.*pendleton.*parker)
* ^To: .*(apo|bp4@prism.gatech.edu)
/dev/null

# Killfile ruffside
:0
* ^From:.*ruffside
/dev/null

# Trash lugatgt listproc stupidness
:0 HB
* signoff lugatgt-list
/dev/null

# Remove any duplicate emails (mailing list Cc:'s and stuff)
:0 Wh : $MAILDIR/.idcache.lock
| formail -D 1000 $MAILDIR/.idcache

# Tack on a header to togetherweb mail
:0 hbf
* ^(To|Cc): .*@[^ ]*togetherweb.com
| $HOME/bin/addtotop " *** NOTICE: This mail was sent to togetherweb.com ***\n"

# Remove yahoo advertisements
:0 hbf
* ^From:.*yahoo
| sed '/^_\{50\}$/,$ d'

# Fix incorrect sigdashes
:0 Bfwhb
* ^--$
| sed 's/^--$/-- /'

# Remove cyberbuzz ad
:0 hbf
* ^Message-ID:.*cyberbuzz
| sed '/^-\{49\}$/,$ d'

# Send autoresponse to @togetherweb mail, saying I've changed my address.
:0 chw
* !^FROM_DAEMON
* !^X-Loop: .*moshe@runslinux\.net
* ^(To|Cc): .*(moshe|jehsom)@togetherweb\.com
* !^From: .*(moshe|jehsom|togetherweb\.com)
| { formail -r -I "From: Moshe Jacobson moshe@runslinux.net" \
      -a "Date: `date -R`" \
      -A "X-Loop: moshe@runslinux.net"; \
    cat $HOME/.autoreply.togetherweb; \
    cat $HOME/.sig.moshe; \
  } | sendmail -oi -t

:0
* ^(To|Cc): .*jehsom
{
:0 BH
* !^FROM_DAEMON
* !^X-Loop: .*jehsom@jehsom.com
* zipscript|debug( mode|ging)|glftpd|\b0day\b|file_id.diz|\brels\b|racers

{
:0 c
* !^From: .*maestro
! maestro@dv8.org

:0 :
jehsom/INBOX-zipscript

}

:0 BH
* !^FROM_DAEMON
* !^X-Loop: jehsom@jehsom\.com
* \b(ftp.(account|server)|serial|vcd|iso|bin.*cue|irc|key|cd[123])\b
jehsom/INBOX-scene

:0 :
jehsom/INBOX
}

# Vacation Autoresponder for moshe@*
#:0 chw
#* ^(To|Cc):.*moshe
#* !^X-Loop: moshe@runslinux\.net
#| { formail -r -I "From: Moshe Jacobson moshe@runslinux.net" \
#-A "X-Loop: moshe@runslinux.net" \
#-a "Date: `date -R`"; \
#cat $HOME/.autoreply.vacation
#cat $HOME/.sig.moshe
#} | sendmail -oi -t

# Mail from my website
:0 hb :
* ^X-Mail-Gateway:.*Doug
moshe/INBOX-web

# Put list mailings in a separate folder
:0 hb :
* ^Subject: (Re: )?\[[^]]*\] .*
* !^(To|Cc): .*(jehsom|moshe)
moshe/INBOX-lists

# Catch mail from root/daemons and put it in a separate folder
:0 hb :
* ^From:.*(MAILER-DAEMON|root|bugzilla-daemon)@
moshe/INBOX-daemons

# Mail from alert scripts
:0 :
* ^To:.* alert@togetherweb.com
moshe/INBOX-daemons

# Orchstra and list/chat lists
:0 hb :
* ^(To|Cc): .*(orchestra|-(list|chat))@
moshe/INBOX-lists

:0 :
moshe/INBOX

5. Resources

  • man pages: procmail(1), procmailrc(5), procmailex(5), procmailsc(5)

    The man pages for (respectively) procmail itself, the procmailrc file and
    its format, a lot of good examples of procmail recipes, and procmail
    scoring. procmailrc(5) and procmailex(5) should provide pretty much all
    the information you’ll need on how to use procmail. For some really
    advanced scary stuff on mail scoring using procmail (not covered in this
    document), see procmailsc(5).
  • http://www.procmail.org/
    This is the official Procmail website.
  • http://www.iki.fi/era/procmail/mini-faq.html

    This is a very well-written FAQ on Procmail. It should answer the most
    common questions concerning problems you may run into while using
    Procmail.

  • http://www.spambouncer.org/
    The SpamBouncer is a set of procmail instructions that search the headers
    and text of your incoming email to see if it meets one or more of a list
    of conditions for probable spam. It will then either tag the suspected
    spam and return it to your main incoming mailbox, file suspected spam in
    a separate folder, delete spam from known spam sources, send a simulated
    MAILER-DAEMON "bounce", complain to the "upstream providers" of known
    spammers or spam sites/domains, etc.
  • http://junkfilter.zer0.org/
    junkfilter is a procmail -based filter system for electronic mail. It
    filters sex spam, MLM schemes, and all other types of unsolicited
    commercial e-mail (UCE).
  • http://ceti.pl/~kravietz/spamrc/

    Spamrc is a set of Procmail scoring rules that try to eliminate spam from
    your incoming email based on scoring. Spamrc checks a number of common
    spam signatures, giving the email some points for each matching rule. The
    points are added and the result is used to decide the probability that the
    message is spam. The scheme works quite well, though it has some false
    positives and misses.

LaTeX

Table of Contents

1. Introduction

TeX is the document markup language and typesetting engine developed by Don Knuth.
LaTeX is a modern set
of TeX macro packages that make the parent language much more versatile and easier to use.
In essence, LaTeX (latex) takes source code in .tex format, and generates
an intermediate .dvi file, which is then processed by dvips to produce
printable .ps. In a nutshell, TeX generates output by gluing rectangles of typeset
material together. Several letters are glued together to form a word; several of these units
are glued together to form paragraphs; paragraph units are glued to figure units and tables
units, all of which are just rectangles, and these form the output page.

To compile your .tex files on the command line (see the man pages for
additional options for dvips):

latex file.tex
dvips -t letter file -o

Or in a Makefile:

%.dvi: %.tex
	latex $<
	- bibtex $*

%.ps: %.dvi
	dvips -t letter -o $@ -C$(NUM) -D$(DPI) $<

(Recipe lines in a Makefile must be indented with a tab character.)

The intermediate .dvi file is a device independent file which means that it
contains all of the typesetting information, but it is not ready to be sent to a rendering
device (printer). You may view these intermediate files with xdvi. For actual output by
a physical printing device, you will need to convert this using a tool designed to
generate the output you want: dvips makes PostScript files, dvipdf makes
Proprietary Data Formats; you get the point from the names. (It is also possible to
convert from .ps to .pdf using ps2pdf.)

In the following sections, we will cover how to typeset a report and a letter.
We’ll cover how to create the finest looking math you have ever seen;
how to include tables, figures, and images in
your work; and how to cite references.

2. Basic LaTeX File

All basic LaTeX files have at a bare minimum a declaration of the class of the
document to be rendered, which may have options, and begin and end document tags.
If you LaTeX a document at this stage, you won’t get any errors, but you won’t
get any output. The further addition of LaTeX commands or plain text will be
compiled into your output code, as in this example, the (nearly) smallest LaTeX
that will compile. You will get a free page number at the bottom.

\documentclass{report}
\begin{document}
Hello, World!
\end{document}

The class declaration determines what style the rendered document will take, one particular
document with no content change at all is rendered significantly differently simply by changing
its class from report to article. Options to the different classes include paper
size and orientation, font size, and more. They are specified as in
\documentclass[12pt,landscape]{book}. Optional arguments in LaTeX are indicated
by [ ] pairs, and required arguments by { }.

LaTeX source code is basically marked up plain text, so type in your full text, LaTeX it,
and then begin adding your special items of which there are many examples in this handout.
The first things to modify in your text are to signify a paragraph break with a double
carriage return (a blank line). You may add comments to your source code with the %
character anywhere in a line (the rest of the line is a comment.) To comment a block of
code, you must comment every single line. There are certain specific command related characters
which must be escaped using a backslash to be represented after processing by LaTeX
(examples include: #, %, &, etc. which are coded as \#, \%, \&).

Certain useful macros are pre-defined by LaTeX. Macros available in most class files
include nicely formatted titles (\title{Title}), author name (\author{Your Name}),
and date (\date{\today}). These are defined before the begin document tag; they are
invoked inside the document using \maketitle.
You may divide your source code into more manageable pieces by simply chopping it into many
.tex files and including them into a master .tex file using
\input{subfile.tex}. (These files do not require begin and end document tags, in
fact those will break the compilation process; input simply drops the block of text in place
of the input macro.) Every logical piece of a document is typically made into a section
(\section{Section Title}), we therefore like to put each section into its own LaTeX
file, which you can see in the source code for this handout.
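Putting those pieces together, here is a small sketch (the section and file
names are made up):

\documentclass[12pt]{article}
\title{My First Report}
\author{Your Name}
\date{\today}

\begin{document}
\maketitle

\section{Introduction}
Plain marked-up text goes here.

% drop the contents of another file in place:
\input{nextsection.tex}
\end{document}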

More than you ever wanted to know about all of the available LaTeX commands can be found
in [2] and [3]. There are also LaTeX packages to do things like slides
(for Powerpoint-like slides, check out prosper:
http://sourceforge.net/projects/prosper/),
to format your work for journal
submissions, for music typesetting (MuSiXTeX), chemistry, and many other
things [1].

3. A Letter to Mom

There aren’t too many differences between a letter and a report in LaTeX. To begin
with, you define \documentclass{letter} and give yourself begin and end document
tags as before. Then you need to define your
addressing information (\address{}), name (\name{}), and signature line
(\signature{}) (also generally your name) outside of the document tags.

Inside the document tags, you can then define multiple letters using

\begin{letter}{Mom's address}
Hello, Mom!
\end{letter}

The begin letter tag takes a second argument, the recipient’s address. Then you supply
an \opening{Dear John:}, the letter body, and a \closing{Sincerely,}.
Enclosures (\encl{}) and carbon copies (\cc{}) can also be added. Then, end
your letter. No matter how many letters you put in the document tags, all will get their
information about you from the macros at the beginning of the file. Just run
latex and dvips as before.

This letter also gives two examples of lists, an enumerated list and an itemized list
(descriptive lists exist as well.)
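For completeness, here is a minimal sketch of a whole letter file using the
macros just described (the addresses are made up):

\documentclass{letter}
\address{123 Hacker Way \\ Atlanta, GA 30332}
\name{Your Name}
\signature{Your Name}

\begin{document}
\begin{letter}{Mom \\ 456 Home Street \\ Hometown, USA}
\opening{Dear Mom,}
College is going well, and Linux is even better.
\closing{Sincerely,}
\cc{Dad}
\end{letter}
\end{document}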

4. IEEE Math

Mathematical equations are a specialty of LaTeX. There isn’t any equation out there that
isn’t easily described in the LaTeX language. You can put equations
like e=mc2 in the text
(using $e=mc^2$). The $ denotes the start and end
of in-line math. Or you can do equations
outside the text, and they can be numbered or not. The names of the math macros are
generally quite logical as for \lim,
\sum, and \alpha.
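For instance, a numbered display equation built from those macros:

\begin{equation}
\lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{k^2} = \frac{\pi^2}{6}
\end{equation}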

5. Tables

Tables are a wonderful way to present certain types of information. In LaTeX we
first need to distinguish between two closely related environments. The first is
the obvious table. This environment wraps around a conceptual table, a block of
processed text, but does not itself create any content. tabular is what actually
sets blocks of text and line grids together to make a readable table. In the
tabular options, r, l, and c stand for right, left, and center column
justification, and the | (pipe) creates vertical lines. \hline makes the
horizontal lines, & separates the columns, and \\ ends rows.
Go forth and tabulate!

Other things that apply globally and occur in this example include the \newcommand{}
directive, allowing you to write your own macros, and the \caption{} directive, which is
self-explanatory.
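Here is a minimal sketch of the two environments together (the contents are
made up):

\begin{table}
\centering
\begin{tabular}{|l|c|r|}
\hline
Item & Quantity & Price \\
\hline
Widget & 3 & \$4.50 \\
Gadget & 1 & \$9.95 \\
\hline
\end{tabular}
\caption{A tabular wrapped in a table.}
\end{table}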

6. Figures

Figures are really quite simple: like the table environment, a figure is
simply a wrapper which signifies a block of imagery. Also like the
table environment, it takes a \caption{}. The actual content of a figure
can be anything (even a tabular), but it is usually an image. The easiest way to
include an image is the graphicx package, which you must include
(using \usepackage{}) and which you can then tell that you are using the
dvips post-processor. So we can simply code
\includegraphics[width=3in,height=6cm]{figure.eps} and produce a picture (where
width and height are optional).
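In full, a sketch (the image file name is made up):

% in the preamble:
\usepackage[dvips]{graphicx}

% in the document body:
\begin{figure}
\centering
\includegraphics[width=3in]{figure.eps}
\caption{An example figure.}
\end{figure}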

7. Citing Sources

To include references to works that you have cited in your research, you need a helper
for LaTeX called bibTeX. bibTeX bibliography entries are stored in a .bib file.
The attached .bib file contains entries for all of the common reference types and
many obscure ones. To include a bibliography file in your document, you say
\bibliography{filename}. To cite the references within your document, you simply
use the command \cite{keyword}. Then, to actually compile the bibliography into
typesettable citation references, you run bibTeX as:

latex file
bibtex file
latex file
latex file
dvips -t letter file -o

We know this seems like a lot of extra compilation. Because LaTeX can only typeset
the text that is available at run-time, the first run of LaTeX will not generate
any citations (or table, figure, or section references). Running bibTeX generates
two files, .bbl and .blg, which contain the typesetting data we need. We must run
LaTeX a second time (and sometimes a third) for the citation numbers to register
in the text. If we fail, we will see [?].
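As a sketch, here is what a .bib entry and its citation look like (the key and
file name are made up; note that you also need a \bibliographystyle
declaration, e.g. plain, which the text above does not mention):

% in refs.bib:
@book{lamport94,
  author    = {Leslie Lamport},
  title     = {LaTeX: A Document Preparation System},
  publisher = {Addison-Wesley},
  year      = {1994}
}

% in the document:
Lamport~\cite{lamport94} covers this fully.
\bibliographystyle{plain}
\bibliography{refs}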

8. Pictures

9. Resources

  1. Michel Goossens, Sebastian Rahtz, and Frank Mittelbach.
    The LaTeX Graphics Companion: Illustrating Documents with
    TeX and PostScript
    .
    Addison-Wesley, Reading, MA, 1997
  2. Helmut Kopka and Patrick W. Daly.
    A Guide to LaTeX2e: Document Preparation for Beginners and
    Advanced Users
    .
    Addison-Wesley Publishing Company, Wokingham, England, second
    edition, 1995.
  3. Leslie Lamport.
    LaTeX: A Document Preparation System User’s Guide and
    Reference Manual
    .
    Addison-Wesley Publishing Company, Reading, MA, second edition, 1994.

External Links

  • http://latex-project.org/
  • http://www.ctan.org/
  • The (not so) Short Introduction to LaTeX – http://ctan.tug.org/tex-archive/info/lshort/english/

Programming Linux Games

Table of Contents

1. Origins of UNIX Gaming

1.1. Traditional UNIX Gaming

  • Mostly console based
  • BSD Games Collection (/usr/games)
  • MUD/MUSH servers
  • Cheesy X11 games
    • xbill, xboard (interface to GNU chess)
    • Nothing requiring sophisticated graphics
  • Nethack
    • Possibly the most addictive game ever written
    • Attempted Dreamcast port

1.2. Transition to Modern Gaming

  • UNIX traditionally has not been much of a “video” gaming platform
  • Workstation-oriented graphics, high-quality but not good for arcade-style
  • PC developed a gaming scene due to commodity hardware
    • Exploded with VGA graphics adapter
    • Although limited, chip had lots of potential
    • Smooth scrolling, decent color depth
  • Linux was still in its infancy; Xlib (slow) and SVGALib (bitchy) were the only options
    • SVGALib is a throwback to the mid-90's; limited hardware support
    • Xlib is network-friendly and flexible, but not very fast
  • Doom and Abuse ported with SVGALib and X
    • Dave Taylor = Linux geek
    • Ported Doom, and when his now-defunct startup created Abuse, ported that too
    • Good ports, but finicky

1.3. Advent of 3D Accelerators

  • Much heavier reliance on driver support
    • the battle used to be getting vendors to supply 2D programming specs
    • now 2D specs are easy; 3D specs are more difficult
    • at first, few vendors supported Linux
  • Programming specs are hard to acquire
  • First big step: Daryll Strauss ported Glide (3Dfx) to Linux
    • Brian Paul wrote Mesa, a decent software OpenGL implementation
    • Mostly complete and correct, but SLOW
    • Daryll Strauss convinced 3Dfx to let him port Glide
    • Soon Mesa was patched for Glide output, and became viable for games
  • Limited hacks to use Glide for accelerated OpenGL

1.4. Utah GLX

  • Proper integration of XFree86 and various 3D accelerators
    • Glide is not a general purpose graphics API, and often doesn’t play nice
    • SGI designed GLX specs, but Mesa + Glide didn’t really use it
    • Utah GLX is a hardware accelerated GLX extension for XFree86
  • Reasonably good performance, but lousy AGP support
    • lacked infrastructure for modern 3D accelerators
    • was still playing the catch-up game
  • SGI released GLX source code, leading to better compatibility
    • Big publicity, not sure how much use it was, but much appreciated
  • Linux became a viable target for games; Loki released Heavy Gear II, Soldier of Fortune, and other Utah GLX oriented games
    • Precision Insight helped develop drivers

1.5. Direct Rendering Interface, DRI

  • Attempt to overcome intrinsic limitations in Utah GLX’s design
    • DRI has a lot to do with AGP memory allocation and management
  • Reached a point of stability about when Loki reached a point of bankruptcy
    • Loki spent a lot of effort trying to make games work on shaky OpenGL grounds
  • Excellent 3D performance on supported hardware
    • Support is finally getting there on many cards; still playing the catch-up game
  • Requires DRM (Direct Rendering Manager) kernel module for memory allocation
    • DRM is an interface for allocating the necessary memory buffers; it generally also requires the AGPGART driver

2. The Linux Gaming “Industry”

2.1. The NVIDIA Menace

  • Seems to have a bad case of "not invented here" syndrome
  • Xinerama (dual head support) apparently not good enough for them; on their own with TwinView
    • Not really a problem, unless you want to do dual head with a non-NVIDIA card
  • Drivers share a codebase with Windows drivers
  • Developed in-house by NVIDIA, and reasonably well supported
    • Dedicated support staff, #nvidia on openprojects, frequent updates

2.2. The ATI Factor

  • It appears that ATI is sick of dealing with Precision Insight (commercial Linux video driver developer) and intends to develop drivers in-house
    • Precision Insight charges an insane hourly rate
    • Most of PI’s top employees have left
  • ATI does not have a good reputation for driver development, but hopefully something will become of this
  • Radeon 8500 is reportedly "schweeeeeeet"

2.3. Available Games

  • Fall into three categories: ports of Windows games (Loki, Tribsoft,
    Icculus), commercial games with Linux support from the original
    developer (id, Epic, Sunspire), and hobbyist developments (FreeCiv,
    Stratagus, Worldforge)

  • Porting business is on shaky ground; the IT industry crash didn’t help
    • Don’t know if anyone is actually making money
    • Porting is a viable business as long as you don’t have to make money
  • Commercial support relies mainly on developer sentiments and off-hours hacking
  • Hobbyist development is the future
    • We’ve built a kernel, two major desktop environments, created the best firewall system ever…
    • … why can’t we make good games?
    • Lack of artists, perhaps?

3. Writing Linux Games

3.1. Simple DirectMedia Layer, SDL

  • Portable 2D video acceleration abstraction layer (see the sketch after this list)
  • Pass-through support for OpenGL; still useful in 3D games for input and audio handling
  • Sub-APIs for video, audio, input, threading, CD-ROM access, and file IO abstraction
  • Core of SDL is simple and tiny, but much additional functionality (image loading, high-level sprite management, etc) is available as add-on libraries
  • Strongest following of any multimedia toolkit
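
A minimal sketch of the SDL initialization dance, using the classic SDL
1.2 API (the 640x480, 16-bit mode is an arbitrary choice):

#include <SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    SDL_Surface *screen;

    /* Bring up the video subsystem. */
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Request a 640x480 window at 16 bits per pixel. */
    screen = SDL_SetVideoMode(640, 480, 16, SDL_SWSURFACE);
    if (screen == NULL) {
        fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    SDL_Delay(3000);    /* give the window a moment on screen */
    SDL_Quit();
    return 0;
}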

3.2. OpenAL

  • Portable 3D audio library; a mix of Creative’s EAX, OpenGL, and open source (see the sketch after this list)
  • Was doing quite well when both Loki and Creative were working on it
  • However, Loki is unable to continue support, so OpenAL has been stagnating
  • Some disagree with AL’s design, but I consider it clean and effective
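
As a sketch of what the API feels like (device and context setup through
ALC, then a source positioned in 3D space; the generated tone is just
filler data):

#include <AL/al.h>
#include <AL/alc.h>
#include <math.h>

int main(void)
{
    /* Open the default output device and make a context current. */
    ALCdevice *dev = alcOpenDevice(NULL);
    ALCcontext *ctx = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(ctx);

    /* One second of a 440 Hz tone as 16-bit mono PCM. */
    short samples[44100];
    int i;
    for (i = 0; i < 44100; i++)
        samples[i] = (short)(32000 * sin(2.0 * 3.14159265 * 440.0 * i / 44100.0));

    ALuint buf, src;
    alGenBuffers(1, &buf);
    alBufferData(buf, AL_FORMAT_MONO16, samples, sizeof(samples), 44100);

    /* The 3D part: place the source two units to the listener's right. */
    alGenSources(1, &src);
    alSourcei(src, AL_BUFFER, buf);
    alSource3f(src, AL_POSITION, 2.0f, 0.0f, 0.0f);
    alSourcePlay(src);

    /* Crude wait until the source finishes playing. */
    ALint state;
    do {
        alGetSourcei(src, AL_SOURCE_STATE, &state);
    } while (state == AL_PLAYING);

    alDeleteSources(1, &src);
    alDeleteBuffers(1, &buf);
    alcMakeContextCurrent(NULL);
    alcDestroyContext(ctx);
    alcCloseDevice(dev);
    return 0;
}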

3.3. OpenML

  • Corporate-backed portable multimedia library
  • Lot of potential, if the corporations can stay tame
  • Appears to be a bid to obsolete DirectX; if this happens, it won’t be for a while
  • Specification freely available

4. Resources

Mac OS X

Table of Contents

1. Introduction

MacOS X (pronounced "Mac Oh Ess Ten") is the name of Apple’s latest version
of the Macintosh operating system. Though it carries a similar name, it has
very little in common with previous releases of MacOS.

MacOS X is a UNIX-like operating system with the look and feel of a Macintosh
computer. In 1996, Apple acquired NeXT, Inc. (the acquisition that brought
Steve Jobs back to Apple). It was the NeXT codebase (NEXTSTEP/OPENSTEP) that
Apple used as the foundation for what eventually became MacOS X.

The following outline aims to point out some of the interesting aspects of
MacOS X, from both a user and a developer perspective.

2. The Graphical User Interface

The graphical user interface of any
operating system is the part that is often evaluated in magazine reviews or
is touted in discussions about which OS is better. True, the GUI is an
important part of the operating system, but not the only part. The MacOS X
GUI is composed of several key components.

2.1. Dock

The Dock was an idea brought over from NEXTSTEP and
OPENSTEP. Dock-like interfaces are also quite popular among UNIX window
managers, such as Window Maker and AfterStep. The MacOS X dock is
virtually a complete reworking of the classic NEXTSTEP dock, but the two
still have some things in common. To place an item on the dock, you just
drag the object onto it; to remove an item, you drag it off. The dock can
be positioned along any edge of the screen and also offers some special
effects to make it that much more interesting. The Dock under MacOS X
replaces the Apple menu from previous releases of MacOS.

[Figure: The Dock]

2.2. Finder

Finder is the name of the GUI "shell" that runs when
you log in to MacOS X. Finder is responsible for putting the menu bar
at the top of the screen and also placing icons on the desktop. Finder
can be stopped and restarted without rebooting, a feature that previous
releases lacked (Well, this isn’t totally true. You could restart the
Finder in previous MacOS releases, but it wasn’t an officially
supported feature and was rather flaky.). Finder is also responsible for
the file manager windows that pop up when you open a folder or drive icon.
The views available in the MacOS X Finder are similar to NEXTSTEP
(horizontal scrolling view).
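
One quick (if unofficial) way to restart the Finder is simply to kill its
process from a terminal and let the system relaunch it:

killall Finder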

2.3. Login Window

Since MacOS X is a UNIX-like operating system,
it requires one to log in at boot time. Users of xdm and other X11-based
login managers may notice a similarity to the MacOS X login screen. You
can get a console login from the MacOS X graphical login window. Just
type ">console" as the username and click Login.

3. The UNIX part

The "UNIX part" of MacOS X is called Darwin. This
is the part that’s open source and available to anyone. It runs on both
PowerPC Macintosh computers as well as Intel-based computers. Darwin
consists of the kernel, userland and system tools, and networking code.
Basically it’s everything except the graphical user interface.

3.1. Technologies

Apple loves to attach fancy names to every software project they have
going. These names get thrown around, and since they usually bear little
relation to the actual project, things can sometimes get confusing. Here
are the common ones you’ll hear when discussing MacOS X:

  1. DisplayPDF: The underlying concept behind
    graphics primitives in MacOS X. All 2D graphics are stored as PDF
    primitives, which expands on the NEXTSTEP usage of DisplayPostScript
    technology.
  2. Aqua: The name of the theme present in a
    default MacOS X install. This is what people try to clone for KDE and
    other environments.
  3. Quartz: Roughly equivalent to the X server
    on a UNIX system. Quartz draws stuff on the screen.

Several existing and new technologies came
together to create Darwin.

  1. FreeBSD: The TCP/IP stack as well as some
    of the userland tools from FreeBSD 3.2 were incorporated into the
    Darwin source tree.
  2. NEXTSTEP/OPENSTEP: The Mach+BSD kernel
    architecture from NEXTSTEP provided the foundation on which the
    Darwin kernel was built. Filesystem layout, packaging system, and
    object concepts were incorporated from NEXTSTEP.
  3. GNU: Many GNU programs were incorporated
    into the Darwin source tree, such as gcc, GNU make, and grep. For
    the most part, userland tools in Darwin and MacOS X are the same as
    what you would find on a Linux distribution.
  4. NetBSD: Very portable code. Whenever a
    FreeBSD tool couldn’t easily be ported, the Darwin team went to
    the NetBSD source tree. This is also a trick used by many Linux
    distributions.

3.2. Booting

The system boot procedure currently resembles that
of NEXTSTEP. It’s almost a pure BSD-style boot system, with init
running /etc/rc. The change under OS X is the addition of
SystemStarter. The rc scripts handle basic system init stuff, such as
loading kernel modules and mounting filesystems. After the critical
steps, SystemStarter takes over and brings up services, networking, and
the GUI stuff. SystemStarter works much like a SysV init system. Each
"service" can be found in /System/Library/StartupItems; each service
is a subdirectory containing resource files and a script that actually
does the work. Apple’s goal is to eventually move to SystemStarter for
all boot-time work.
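
As a sketch of the layout (the service name MyService is hypothetical), a
startup item is a directory holding the worker script plus a parameters
file:

/System/Library/StartupItems/MyService/
    MyService                  (executable script that starts the service)
    StartupParameters.plist    (describes the service to SystemStarter)

A StartupParameters.plist, in the old-style property list format, might
look something like:

{
  Description     = "My hypothetical service";
  Provides        = ("MyService");
  Requires        = ("Network");
  OrderPreference = "None";
}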

3.3. Filesystem Layout

The filesystem layout may look a bit
unfamiliar if you are a Linux user, but it’s not too difficult to
follow.

  1. Standard Directories: You’ll find the
    /bin, /dev, /sbin, and /usr directories. Those pretty much follow
    a standard layout. The /etc, /tmp, and /var directories are
    actually symlinks into directories of the same name under
    /private.
  2. /private: The /private directory holds
    files specific to the local machine (configuration files, temporary
    data, swap data, and logs).
  3. Capital Letter Directories: All of the
    directories beginning with a capital letter are fairly
    self-explanatory. /System is for system-specific services and
    libraries, /Users holds home directories, /Library is like /lib,
    developer tools and documentation are in /Developer, and bundled
    applications are in /Applications. You can add more directories like
    this, but don’t remove the ones created by the installer.

3.4. Things Missing

If you’re familiar with another UNIX-like
operating system, you may be surprised to not find some fairly common
tools.

  1. mount/umount: That’s right, no mount or
    umount command. So how do filesystems get mounted? Just like under
    MacOS. Everything found is mounted at boot time to the location
    specified in the volume header.
  2. NFS: Support for NFS is lacking, and
    without a mount or umount command, it’s even more difficult to
    add.
  3. virtual terminals: Missing on many
    commercial UNIX-like operating systems as well, but still a feature
    liked by many. But, with the GUI, you can run multiple terminal
    windows on the same screen, a la X.

3.5. Kernel

The Darwin kernel, called xnu, is a combined
source tree starting with NEXTSTEP and merging in FreeBSD and NetBSD
stuff, and adding some new things (like the object oriented device
driver layer called IOKit). It can be compiled for PowerPC or Intel
machines. On MacOS X, you’ll find the main kernel file in / and named
mach_kernel.

4. The Programmer Part

MacOS X offers many things specifically
for the developer. There are tools for bringing classic MacOS
applications to MacOS X, as well as tools for bringing UNIX applications
to MacOS X. (much of the information below was gathered from
http://fink.sourceforge.net/doc/porting/)

4.1. ProjectBuilder IDE vs. GNU development tools

Seasoned MacOS
developers will prefer the ProjectBuilder IDE that comes with the
developer CD. It’s a nice GUI for editing code and designing UI
components. Seasoned UNIX developers will enjoy the standard set of
UNIX development tools available (cc, make, bison, yacc, etc.).

4.2. Mach-O binary format

The binary format used by MacOS X is
the Mach-O format. It is not ELF and should not be confused with it,
even though it offers roughly the same functionality. One cool feature
of Mach-O is "fat" binaries, that is, a single binary that supports
multiple architectures.

4.3. Compiler and Preprocessor

The compiler (/usr/bin/cc) is
based on gcc 2.95.2 with Apple modifications. The code is actively
being prepared for merging into the mainline gcc source tree, but
it’s difficult. Among the additions by Apple is support for the
AltiVec registers. The preprocessor (cpp) is a custom Apple creation
that works nothing like GNU cpp. Most notable is the precomp problem
experienced by people trying to compile open source software on
MacOS X. Precomps are precompiled headers. The special cpp can read
both precompiled headers (binary data files consisting of all the
tokens and dependency information) and regular headers. The problem is
that it does not always work with regular headers. The -no-cpp-precomp
flag is usually preferred when compiling software on MacOS X. (precomp
information gathered from
www.darwinfo.org)
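
For example, a typical invocation when building open source code on
MacOS X looks like:

cc -no-cpp-precomp -o program program.c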

4.4. Libtool

GNU libtool has problems on MacOS X. There are
patches for both the 1.3 and 1.4 versions to correct shared library
generation. MacOS X ships with GNU libtool 1.3.5 patched for MacOS X,
but it’s not completely fixed. The patch can be found at
http://fink.sourceforge.net/doc/porting/libtool.php.

4.5. Libraries

  1. .a and .dylib: Traditional static
    libraries are offered through the .a files. Dynamic libraries end
    with the name .dylib and do not function the same as .so’s on
    Linux.
  2. dyld: The dynamic link editor, roughly the
    libdl.so equivalent on MacOS X.
  3. dlcompat: To help port software from UNIX
    to MacOS X, the dlcompat library was created to translate dlopen()
    calls into the appropriate dyld actions (see the sketch after this
    list).
  4. Versioning and Naming: The dynamic linker
    checks major and minor version numbers, unlike Linux. Naming also
    differs slightly. The version is part of the library name, with
    .dylib being at the very end of the filename. This makes it a bit
    easier to specify certain library versions for certain
    compiles.
  5. Modules, Libs, and Bundles: In Linux, a
    shared library and loadable module for a program ("plug-in") are the
    same. Under MacOS X, a shared library is a .dylib file. A loadable
    module is a bundle ending with .bundle (but sometimes .so).
    Loadable modules are loaded and unloaded through dyld, which is why
    the dlcompat interface exists for ported software.
  6. Compiler Flags: Common symbols are not
    allowed in shared libraries, so you need to use -fno-common.
    Position independent code is default, so there’s no need to specify
    a PIC flag. To build a loadable module, use the -bundle,
    -flat_namespace and "-undefined suppress" compiler flags.
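
A small sketch of the round trip (the file name plugin.bundle and the
symbol plugin_init are made up for illustration). First the module is
built with the flags listed above:

cc -bundle -flat_namespace -undefined suppress -o plugin.bundle plugin.c

Then, with dlcompat installed, a host program can load it through the
familiar dlopen() interface:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* dlcompat translates this into the appropriate dyld calls. */
    void *handle = dlopen("plugin.bundle", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up and call an entry point exported by the module. */
    void (*init)(void) = (void (*)(void))dlsym(handle, "plugin_init");
    if (init != NULL)
        init();

    dlclose(handle);
    return 0;
}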

4.6. Linker and Assembler

The assembler is GNU-derived, but the
linker is not GNU at all, which presents problems for GNU libtool and
other open source software.

4.7. KEXTS

Kernel extensions are similar to modules in the Linux
kernel. They can be added to the kernel without recompiling it. A
KEXT is a bundle, so it is a directory containing the actual loadable
module and any control files or other resources used by the driver.
The driver configuration file is an XML document called Info-macos.xml.

4.8. Packaging

A MacOS X package is a special directory
(Bundle) that ends with .pkg. In this directory are icons, text files
displayed by the installer, scripts, and the actual package contents.

  1. PackageMaker & /usr/bin/package: MacOS X
    currently offers two built-in methods for generating software
    packages. The one that receives the most attention is the
    PackageMaker application. It’s a graphical, fill-in-the-blanks
    program that creates the package for you. /usr/bin/package is the
    NEXTSTEP packaging tool, updated for MacOS X. It should be noted
    that both programs generate packages compatible with Installer.app,
    but that have different internal formats.
  2. Bill of Materials: The package manifest
    is a binary data file called the bill of materials, or bom for
    short. The bom lists the contents of the package (symlinks, files,
    directories), the permissions and ownerships for each item, and a
    32-bit checksum of each file.
  3. Installer.app: This program is
    responsible for adding packages to the system. It reads in the
    contents of a pkg bundle and walks you through the installation
    process. A record of the installed package is retained in the
    /Library/Receipts directory. NOTE: Currently, there is no way to
    remove, query, or upgrade packages. You can only install and make
    packages at this time.
  4. How to distribute pkg Bundles: Since
    MacOS X packages are directories, it’s rather difficult to
    distribute them online. You can use standard tools, such as tar, to
    make the package a single file, but the end user will have to untar
    the package before being able to install it. The preferred solution
    is to use Disk Copy to make a disk image file of the size you need
    and drop the package in there. Disk image files are self-mounting on
    MacOS X.
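
For example, to ship a package as a single file with standard tools (the
package name MyApp.pkg is hypothetical):

tar cf - MyApp.pkg | gzip > MyApp.pkg.tar.gz

The end user must still unpack it before Installer.app can open it, which
is why the Disk Copy route is preferred.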

4.9. Cocoa API

Cocoa is the Objective-C API native to MacOS X.
It originated in OPENSTEP.

4.10. Carbon API

Carbon is the API that works on MacOS 8.6 or
higher and MacOS X.

5. Other Sources of Information

  1. http://www.apple.com/macosx/
    – Official Apple web site for MacOS X
  2. http://www.macosx.org/ – Nice
    site with writeups on how to do things and reviews on applications.
    Somewhat wordy and difficult to navigate, but useful.
  3. http://www.darwinfo.org/ –
    Independent web site devoted to information about Darwin, the open source
    operating system on which MacOS X is based. FAQs and HOWTO documents,
    plus news about Darwin developments.
  4. http://www.stepwise.com/ –
    Think Wisely
  5. http://www.macosxhints.com/ –
    Nice site with hints and such related to MacOS X
  6. http://mrcla.com/XonX/ – Installing
    and using XFree86 4.x on Darwin and MacOS X.

6. Copyright

Copyright 2001 David L. Cantrell Jr., Atlanta, GA, USA

by: David L. Cantrell Jr. (david@burdell.org)
Permission to reprint this information granted provided the above copyright
notice remains attached.