Almost Everything You Ever Wanted To Know About Security*
*(but were afraid to ask!)

This document is meant to answer some of the questions which regularly
appear in the Usenet newsgroups “comp.security.misc” and “alt.security”,
and is meant to provide some background to the subject for newcomers to
those newsgroups.

This FAQ is maintained by Alec Muffett (aem@aber.ac.uk, uknet!aber!aem),
with contributions from numerous others [perhaps]. The views expressed
in the document are the personal views of the author(s), and it should
not be inferred that they are necessarily shared by anyone with whom the
author(s) are now, or ever may be, associated.

Many thanks go to (in no particular order): Steve Bellovin, Matt Bishop,
Mark Brader, Ed DeHart, Dave Hayes, Jeffrey Hutzelman, William LeFebvre,
Wes Morgan, Rob Quinn, Chip Rosenthal, Wietse Venema, Gene Spafford,
John Wack and Randall Atkinson.

Disclaimer: Every attempt is made to ensure that the information
contained in this FAQ is up to date and accurate, but no responsibility
will be accepted for actions resulting from information gained herein.

Questions which this document addresses:

Q.1 What are alt.security and comp.security.misc for?
Q.2 What's the difference between a hacker and a cracker?
Q.3 What is “security through obscurity”?
Q.4 What makes a system insecure?
Q.5 What tools are there to aid security?
Q.6 Isn’t it dangerous to give cracking tools to everyone?
Q.7 Where can I get these tools?
Q.8 Why and how do systems get broken into?
Q.9 Who can I contact if I get broken into?
Q.10 What is a firewall?
Q.11 Why shouldn’t I use setuid shell scripts?
Q.12 Why shouldn’t I leave “root” permanently logged on the console?
Q.13 Why shouldn’t I create Unix accounts with null passwords?
Q.14 What security holes are associated with X-windows (and other WMs)?
Q.15 What security holes are associated with NFS?
Q.16 How can I generate safe passwords?
Q.17 Why are passwords so important?
Q.18 How many possible passwords are there?
Q.19 Where can I get more information?
Q.20 How silly can people get?

—————————————————————————

Q.1 What are alt.security and comp.security.misc for?

Comp.security.misc is a forum for the discussion of computer security
issues, especially those relating to Unix (and Unix-like) operating
systems.
Alt.security used to be the main newsgroup covering this topic, as well
as other issues such as car locks and alarm systems, but with the
creation of comp.security.misc, this may change.

This FAQ will concentrate wholly upon computer related security issues.

The discussions posted range from the likes of “What’s such-and-such
system like?” and “What is the best software I can use to do so-and-so”
to “How shall we fix this particular bug?”, although there is often a
low signal to noise ratio in the newsgroup (a problem which this FAQ
hopes to address).

The most common flamewars start when an apparent security novice posts a
message saying “Can someone explain how the such-and-such security hole
works?” and s/he is immediately leapt upon by a group of self-appointed
people who crucify the person for asking such an “unsound” question in a
public place, and flame him/her for “obviously” being a cr/hacker.

Please remember that grilling someone over a high flame on the grounds
that they are “a possible cr/hacker” does nothing more than generate a
lot of bad feeling. If computer security issues are to be dealt with in
an effective manner, the campaigns must be brought (to a large extent)
into the open.

Implementing computer security can turn ordinary people into rampaging
paranoiacs, unable to act reasonably when faced with a new situation.
Such people take an adversarial attitude to the rest of the human race,
and if someone like this is in charge of a system, users will rapidly
find their machine becoming more restrictive and less friendly (fun?) to
use.

This can lead to embarrassing situations, eg: (in one university) banning
a head of department from the college mainframe for using a network
utility that he wasn’t expected to. This apparently required a lot of
explaining to an unsympathetic committee to get sorted out.

A more sensible approach is to secure a system according to its needs,
and if its needs are great enough, isolate it completely. Please, don’t
lose your sanity to the cause of computer security; it’s not worth it.

Q.2 What’s the difference between a hacker and a cracker?

Let's get this question out of the way right now:

On USENET, calling someone a “cracker” is an unambiguous statement that
some person persistently gets his/her kicks from breaking into other
people's computer systems, for a variety of reasons. S/He may pose
some weak justification for doing this, usually along the lines of
“because it’s possible”, but most probably does it for the “buzz” of
doing something which is illicit/illegal, and to gain status amongst a
peer group.

Particularly antisocial crackers have a vandalistic streak, and delete
filestores, crash machines, and trash running processes in pursuit of
their “kicks”.

The term is also widely used to describe a person who breaks the copy
protection on microcomputer applications software in order to keep or
distribute free copies.

On USENET, calling someone a “hacker” is usually a statement that said
person holds a great deal of knowledge and expertise in the field of
computing, and is someone who is capable of exercising this expertise
with great finesse. For a more detailed definition, readers are
referred to the Jargon File [Raymond].

In the “real world”, various media people have taken the word “hacker”
and coerced it into meaning the same as “cracker” – this usage
occasionally appears on USENET, with disastrous and confusing results.

Posters to the security newsgroups should note that they currently risk
a great deal of flamage if they use the word “hacker” in place of
“cracker” in their articles.

NB: nowhere in the above do I say that crackers cannot be true hackers.
It’s just that I don’t say that they are…

Q.3 What is “security through obscurity”?

Security Through Obscurity (STO) is the belief that a system of any sort
can be secure so long as nobody outside of its implementation group is
allowed to find out anything about its internal mechanisms. Hiding
account passwords in binary files or scripts with the presumption that
“nobody will ever find it” is a prime case of STO.
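
As a concrete illustration (a made-up snippet, not taken from any real
system), a password wired into a program's source survives compilation
as a plain text string, so anyone who can read the executable can
recover it in seconds with a tool like "strings":

    /* sto_demo.c -- hypothetical example of "hiding" a password in a
     * binary.  The string literal survives compilation unchanged, so
     * anyone who can read the executable can recover it, e.g. with:
     *     strings sto_demo
     */
    #include <stdio.h>
    #include <string.h>

    #define SECRET_PASSWORD "Sekrit123"   /* "nobody will ever find it"... */

    int main(void)
    {
        char guess[64];

        printf("Password: ");
        if (fgets(guess, sizeof(guess), stdin) == NULL)
            return 1;
        guess[strcspn(guess, "\n")] = '\0';    /* strip trailing newline */

        if (strcmp(guess, SECRET_PASSWORD) == 0)
            printf("Access granted\n");
        else
            printf("Access denied\n");
        return 0;
    }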

STO is a philosophy favoured by many bureaucratic agencies (military,
governmental, and industrial), and it used to be a major method of
providing “pseudosecurity” in computing systems.

Its usefulness has declined in the computing world with the rise of open
systems, networking, greater understanding of programming techniques, as
well as the increase in computing power available to the average person.

The basis of STO has always been to run your system on a “need to know”
basis. If a person doesn’t know how to do something which could impact
system security, then s/he isn’t dangerous.

Admittedly, this is sound in theory, but it can tie you into trusting a
small group of people for as long as they live. If your employees get
an offer of better pay from somewhere else, the knowledge goes with
them, whether the knowledge is replaceable or not. Once the secret gets
out, that is the end of your security.

Nowadays there is also a greater need for the ordinary user to know
details of how your system works than ever before, and STO falls down
as a result. Many users today have advanced knowledge of how their
operating system works, and because of their experience will be able to
guess at the bits of knowledge that they didn’t “need to know”. This
bypasses the whole basis of STO, and makes your security useless.

Hence there is now a need to create systems which attempt to be
algorithmically secure (Kerberos, Secure RPC), rather than just
philosophically secure. So long as your starting criteria can be met,
your system is LOGICALLY secure.

“Shadow Passwords” (below) are sometimes dismissed as STO, but this is
incorrect, since (strictly) STO depends on restricting access to an
algorithm or technique, whereas shadow passwords provide security by
restricting access to vital data.
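
To make the distinction concrete, here is a small sketch (it assumes a
System V / Linux style shadow setup with getspnam() from <shadow.h>;
names and output are illustrative only). The hashing algorithm is
completely public; what shadow passwords hide is the hashed data
itself, which only a privileged process can read:

    /* shadow_demo.c -- where the password hash lives on a shadowed system */
    #include <stdio.h>
    #include <pwd.h>
    #include <shadow.h>

    int main(int argc, char *argv[])
    {
        const char *user = (argc > 1) ? argv[1] : "root";
        struct passwd *pw = getpwnam(user);   /* world-readable /etc/passwd */
        struct spwd   *sp = getspnam(user);   /* root-only /etc/shadow      */

        if (pw != NULL)
            /* On a shadowed system this is just a placeholder ("x" or "*") */
            printf("passwd field: %s\n", pw->pw_passwd);

        if (sp != NULL)
            printf("shadow field: %s\n", sp->sp_pwdp);  /* the real hash */
        else
            printf("shadow field: not readable (not running as root?)\n");

        return 0;
    }

Run as an ordinary user, the program sees only the placeholder; run as
root, it sees the hash. The algorithm stays public either way, which is
what separates this from STO.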

Q.4 What makes a system insecure?

Switching it on. The adage usually quoted runs along these lines:

“The only system which is truly secure is one which is switched off
and unplugged, locked in a titanium lined safe, buried in a concrete
bunker, and is surrounded by nerve gas and very highly paid armed
guards. Even then, I wouldn’t stake my life on it.”

(the original version of this is attributed to Gene Spafford)

A system is only as secure as the people who can get at it. It can be
“totally” secure without any protection at all, so long as its continued
good operation is important to everyone who can get at it, assuming all
those people are responsible, and regular backups are made in case of
hardware problems. Many laboratory PCs quite merrily tick away the
hours like this.

The problems arise when a need (such as confidentiality) has to be
fulfilled. Once you start putting the locks on a system, it is fairly
likely that you will never stop.

Security holes manifest themselves in (broadly) four ways:

1) Physical Security Holes.

– Where the potential problem is caused by giving unauthorised persons
physical access to the machine, which might allow them to do things
that they shouldn't be able to do.

A good example of this would be a public workstation room where it would
be trivial for a user to reboot a machine into single-user mode and muck
around with the workstation filestore, if precautions are not taken.

Another example of this is the need to restrict access to confidential
backup tapes, which may (otherwise) be read by any user with access to
the tapes and a tape drive, whether they are meant to have permission or
not.

2) Software Security Holes

– Where the problem is caused by badly written items of “privileged”
software (daemons, cronjobs) which can be compromised into doing things
which they shouldn’t oughta.

The most famous example of this is the “sendmail debug” hole (see
bibliography) which would enable a cracker to bootstrap a “root” shell.
This could be used to delete your filestore, create a new account, copy
your password file, anything.

(Contrary to popular opinion, crack attacks via sendmail were not just
restricted to the infamous “Internet Worm” – any cracker could do this
by using “telnet” to port 25 on the target machine. The story behind a
similar hole (this time in EMACS) is described in [Stoll].)
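
If you want to check your own mailer, the test amounts to speaking SMTP
by hand. Here is a minimal sketch (assuming a BSD sockets environment;
"WIZ" was one of the back-door commands in old sendmail versions, and a
fixed mailer should answer it with a 5xx rejection rather than anything
resembling a shell):

    /* smtpcheck.c -- connect to a host's SMTP port and try the old
     * sendmail "WIZ" command.  Usage: smtpcheck <ip-address>
     * (a numeric IP address is assumed, to keep the sketch short)
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(int argc, char *argv[])
    {
        char buf[512];
        int n, s;
        struct sockaddr_in sin;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <ip-address>\n", argv[0]);
            return 1;
        }

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(25);                 /* SMTP port */
        sin.sin_addr.s_addr = inet_addr(argv[1]);

        if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0 ||
            connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            perror("connect");
            return 1;
        }

        n = read(s, buf, sizeof(buf) - 1);        /* greeting banner */
        if (n > 0) { buf[n] = '\0'; printf("%s", buf); }

        write(s, "WIZ\r\n", 5);                   /* the old back door */
        n = read(s, buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("%s", buf); }

        write(s, "QUIT\r\n", 6);
        close(s);
        return 0;
    }

A reply in the 500 range ("command unrecognized") is what you want to
see; anything more cooperative means the mailer is overdue for an
upgrade.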

New holes like this appear all the time, and your best hopes are to:

a: try to structure your system so that as little software as possible
runs with root/daemon/bin privileges, and that which does is known to
be robust.

b: subscribe to a mailing list which can get details of problems
and/or fixes out to you as quickly as possible, and then ACT when you
receive information.

3) Incompatible Usage Security Holes

– Where, through lack of experience, or no fault of his/her own, the
System Manager assembles a combination of hardware and software which
when used as a system is seriously flawed from a security point of view.
It is the incompatibility of trying to do two unconnected but useful
things which creates the security hole.

Problems like this are a pain to find once a system is set up and
running, so it is better to build your system with them in mind. It’s
never too late to have a rethink, though.

Some examples are detailed below; let’s not go into them here, it would
only spoil the surprise.

4) Choosing a suitable security philosophy and maintaining it.

>From: Gene Spafford <spaf@cs.purdue.edu>
>The fourth kind of security problem is one of perception and
>understanding. Perfect software, protected hardware, and compatible
>components don’t work unless you have selected an appropriate security
>policy and turned on the parts of your system that enforce it. Having
>the best password mechanism in the world is worthless if your users
>think that their login name backwards is a good password! Security is
>relative to a policy (or set of policies) and the operation of a system
>in conformance with that policy.
