Subject: [CSL]: CRYPTO-GRAM, May 15, 2002
From: John Armitage <[log in to unmask]>
Reply-To: The Cyber-Society-Live mailing list <[log in to unmask]>
Date: Thu, 16 May 2002 09:34:09 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (1188 lines)

From: Bruce Schneier
To: [log in to unmask]
Sent: 15/05/02 22:14
Subject: CRYPTO-GRAM, May 15, 2002

                  CRYPTO-GRAM

                  May 15, 2002

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            [log in to unmask]
          <http://www.counterpane.com>


A free monthly newsletter providing summaries, analyses, insights, and
commentaries on computer security and cryptography.

Back issues are available at
<http://www.counterpane.com/crypto-gram.html>.  To subscribe, visit
<http://www.counterpane.com/crypto-gram.html> or send a blank message to
[log in to unmask]

Copyright (c) 2002 by Counterpane Internet Security, Inc.


** *** ***** ******* *********** *************

In this issue:
      Secrecy, Security, and Obscurity
      Crypto-Gram Reprints
      News
      Counterpane News
      Fun with Fingerprint Readers
      Comments from Readers


** *** ***** ******* *********** *************

       Secrecy, Security, and Obscurity



A basic rule of cryptography is to use published, public algorithms and
protocols.  This principle was first stated in 1883 by Auguste Kerckhoffs:
in a well-designed cryptographic system, only the key needs to be secret;
there should be no secrecy in the algorithm.  Modern cryptographers have
embraced this principle, calling anything else "security by obscurity."
Any system that tries to keep its algorithms secret for security reasons
is quickly dismissed by the community, and referred to as "snake oil" or
even worse.  This is true for cryptography, but the general relationship
between secrecy and security is more complicated than Kerckhoffs'
Principle indicates.

The reasoning behind Kerckhoffs' Principle is compelling.  If the
cryptographic algorithm must remain secret in order for the system to be
secure, then the system is less secure, because security is compromised
if the algorithm falls into enemy hands.  It's harder to set up different
communications nets, because it would be necessary to change algorithms
as well as keys.  The resultant system is more fragile, simply because
there are more secrets that need to be kept.  In a well-designed system,
only the key needs to be secret; in fact, everything else should be
assumed to be public.  Or, to put it another way, if the algorithm or
protocol or implementation needs to be kept secret, then it is really
part of the key and should be treated as such.
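
A minimal illustration in code may help (the example is this editor's,
not from the essay; it assumes Python's standard library): the MAC
algorithm, HMAC-SHA256, is public and standardized, and the key is the
single secret the system depends on.

    # Kerckhoffs' Principle in miniature: everyone may read this code;
    # only the 32-byte key needs to stay secret.
    import hashlib
    import hmac
    import os

    key = os.urandom(32)              # the one secret in the system
    message = b"attack at dawn"

    # Anyone can compute or verify the tag -- but only with the key.
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(tag)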

Kerckhoffs' Principle doesn't speak to actual publication of the
algorithms and protocols, just the requirement to make security
independent of their secrecy.  In Kerckhoffs' day, there wasn't a large
cryptographic community that could analyze and critique cryptographic
systems, so there wasn't much benefit in publication.  Today, there is
considerable benefit in publication, and there is even more benefit from
using the already published, already analyzed designs of others.  Keeping
these designs secret is needless obscurity.  Kerckhoffs' Principle says
that there should be no security detriment from publication; the modern
cryptographic community demonstrates again and again that there is
enormous benefit to publication.

The benefit is peer review.  Cryptography is hard, and almost all
cryptographic systems are insecure.  It takes the cryptographic
community, working over years, to properly vet a system.  Almost all
secure cryptographic systems were developed with public and published
algorithms and protocols.  I can't think of a single cryptographic system
developed in secret that, when eventually disclosed to the public, didn't
have flaws discovered by the cryptographic community.  And this includes
the Skipjack algorithm and the Clipper protocol, both NSA-developed.

A corollary of Kerckhoffs' Principle is that the fewer secrets a system
has, the more secure it is.  If the loss of any one secret causes the
system to break, then the system with fewer secrets is necessarily more
secure.  The more secrets a system has, the more fragile it is.  The
fewer secrets, the more robust.

This rule generalizes to other types of systems, but it's not always easy
to see how.  The fewer the secrets there are, the more secure the system
is.  Unfortunately, it's not always obvious what secrets are required.
Does it make sense for airlines to publish the rules by which they decide
which people to search when they board the aircraft?  Does it make sense
for the military to publish its methodology for deciding where to place
land mines?  Does it make sense for a company to publish its network
topology, or its list of security devices, or the rule-sets for its
firewalls?  Where is secrecy required for security, and where is it mere
obscurity?

There is a continuum of secrecy requirements, and different systems fall
in different places along this continuum.  Cryptography, because of its
mathematical nature, allows the designer to compress all the secrets
required for security into a single key (or, in some cases, multiple
keys).  Other systems aren't so clean.  Airline security, for example,
has dozens of potential secrets: how to get out on the tarmac, how to get
into the cockpit, the design of the cockpit door, the procedures for
passenger and luggage screening, the exact settings of the bomb-sniffing
equipment, the autopilot software, etc.  The security of the airline
system can be broken if any of these secrets are exposed.

This means that airline security is fragile.  One group of people knows
how the cockpit door reinforcement was designed.  Another group has
programmed screening criteria into the reservation system software.
Other groups designed the various equipment used to screen passengers.
And yet another group knows how to get onto the tarmac and take a wrench
to the aircraft.  The system can be attacked through any of these
avenues.  But there's no obvious way to apply Kerckhoffs' Principle to
airline security: there are just too many secrets, and there's no way to
compress them into a single "key."  This doesn't mean that it's
impossible to secure an airline, only that it is more difficult.  And
that fragility is an inherent property of airline security.

Other systems can be analyzed similarly.  Certainly the exact placement
of land mines is part of the "key," and must be kept secret.  The
algorithm used to place the mines is not secret to the same degree, but
keeping it secret could add to security.  In a computer network, the
exact firewall and IDS settings are more secret than the placement of
those devices on the network, which is in turn more secret than the brand
of devices used.  Network administrators will have to decide exactly what
to keep secret and what not to worry about.  But the more secrets, the
more difficult and fragile the security will be.

Kerckhoffs' Principle is just one half of the decision process.  Just
because security does not require that something be kept secret doesn't
mean that it is automatically smart to publicize it.  There are two
characteristics that make publication so powerful in cryptography.  One,
there is a large group of people who are capable and willing to evaluate
cryptographic systems, and publishing is a way to harness the expertise
of those people.  And two, there are others who need to build
cryptographic systems and are on the same side, so everyone can learn
from the mistakes of others.  If cryptography did not have these
characteristics, there would be no benefit in publishing.

When making decisions about other security systems, it's important to
look for these two characteristics.  Imagine a "panic button" in an
airplane cockpit.  Assume that the system was designed so that its
publication would not affect security.  Should the government publish
it?  The answer depends on whether or not there is a public community of
professionals who can critique the design of such panic buttons.  If
there isn't, then there's no point in publishing.

Missile guidance algorithms are another example.  Would the government be
better off publishing its algorithms for guiding missiles?  I believe the
answer is no, because the system lacks the second characteristic above.
There isn't a large community of people who can benefit from the
information, but there are potential enemies that could benefit from it.
Therefore, it is better for the government to keep the information
classified and only disclose it to those it believes should know.

Because the secrecy requirements for security are rarely black and white,
publishing now becomes a security trade-off.  Does the security benefit
of secrecy outweigh the benefits of publication?  It might not be easy to
make the decision, but the decision is straightforward.  Historically,
the NSA did not publish its cryptographic details -- not because their
secrecy improved security, but because it did not want to give its
Cold-War-world enemies the benefit of its expertise.

Kerckhoffs' Principle generalizes to the following design guideline:
minimize the number of secrets in your security system.  To the extent
that you can accomplish that, you increase the robustness of your
security.  To the extent you can't, you increase its fragility.
Obscuring system details is a separate decision from making your system
secure regardless of publication; it depends on the availability of a
community that can evaluate those details, and on the relative sizes of
the "good guy" and "bad guy" communities that can make use of those
details.


Kerckhoffs' Paper (in French):
<http://www.cl.cam.ac.uk/~fapp2/kerckhoffs/la_cryptographie_militaire_i.htm>

Another essay along similar lines:
<http://online.securityfocus.com/columnists/80>


** *** ***** ******* *********** *************

             Crypto-Gram Reprints



Crypto-Gram is currently in its fifth year of publication.  Back issues
cover a variety of security-related topics, and can all be found on
<http://www.counterpane.com/crypto-gram.html>.  These are a selection of
articles that appeared in this calendar month in other years.

What Military History Can Teach Computer Security, Part II
<http://www.counterpane.com/crypto-gram-0105.html#1>

The Futility of Digital Copy Protection
<http://www.counterpane.com/crypto-gram-0105.html#3>

Security Standards
<http://www.counterpane.com/crypto-gram-0105.html#7>

Safe Personal Computing
<http://www.counterpane.com/crypto-gram-0105.html#8>

Computer Security: Will We Ever Learn?
<http://www.counterpane.com/crypto-gram-0005.html#ComputerSecurityWillWeEverLearn>

Trusted Client Software
<http://www.counterpane.com/crypto-gram-0005.html#TrustedClientSoftware>

The IL*VEYOU Virus (Title bowdlerized to foil automatic e-mail traps.)
<http://www.counterpane.com/crypto-gram-0005.html#ilyvirus>

The Internationalization of Cryptography
<http://www.counterpane.com/crypto-gram-9905.html#international>

The British discovery of public-key cryptography
<http://www.counterpane.com/crypto-gram-9805.html#nonsecret>


** *** ***** ******* *********** *************

                      News



Microsoft is making efforts to deal with the problem of SOAP tunneling:
<http://www.theregus.com/content/4/24624.html>
<http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnglobspec/html/ws-security.asp>

Voice mail security can be just as important as network security:
<http://story.news.yahoo.com/news?tmpl=story&u=/ap/20020414/ap_on_hi_te/purloined_voice_mail_3>
<http://www.computerworld.com/securitytopics/security/story/0,10801,70048,00.html>

Interesting interviews with Ralph Merkle and Whitfield Diffie about the
invention of public-key cryptography:
<http://www.itas.fzk.de/mahp/weber/merkle.htm>
<http://www.itas.fzk.de/mahp/weber/diffie.htm>

Airline security comic:
<http://images.ucomics.com/comics/bz/2002/bz020417.gif>

Hackers target Israel:
<http://www.computing.vnunet.com/News/1130941>

Slashdot discussion of my "Liability and Security" essay:
<http://slashdot.org/article.pl?sid=02/04/21/0058214&mode=thread>

A typical network administrator is deluged with security advisories,
warnings, alerts, etc.  Much of it is hype designed to push a particular
product or service.
<http://www.newsfactor.com/perl/story/17273.html>
<http://www.cnn.com/2002/TECH/internet/04/24/virus.hype/index.html>

How Microsoft should treat security vulnerabilities, and how they can
build trust:
<http://www.osopinion.com/perl/story/17344.html>

New hacker tools that evade firewalls and IDSs:
<http://news.com.com/2100-1001-887065.html>
<http://www.nwfusion.com/news/2002/0415idsevad.html>

Insiders may be the most dangerous security threat:
<http://news.zdnet.co.uk/story/0,,t269-s2108940,00.html>
<http://www.computerworld.com/securitytopics/security/story/0,10801,70112,00.html>

If Microsoft made cars instead of computer programs, product liability
suits might by now have driven it out of business.  Should software
makers be made more accountable for damage caused by faulty programs?
<http://www.economist.com/science/tq/displaystory.cfm?story_id=1020715>

Excellent discussion of national ID cards.  A must-read for anyone
involved in the debate.
<http://books.nap.edu/html/id_questions>

The case for a national biometric database.  I don't think the author
understands security at all.
<http://www.acm.org/ubiquity/views/j_carlisle_1.html>

ISS has been running some pretty cool intrusion-detection commercials on
television.  If you've missed them, you can see them here:
<http://www.iss.net/campaigns/index.php>

This is a three-part report on bank security in general.  The first part
is on the increase in security breaches, the second is the anatomy of a
hack, and the third is a look at some of the reasons for insecurities in
the system.
<http://news.com.com/2009-1017-891346.html>

French company Vivendi held an electronic vote.  Afterwards, there were
stories that hackers tampered with the vote.  Others said that the vote
was legitimate.  Now the courts are getting involved.  This underscores
the biggest problem with electronic votes: they're hackable, and there's
no way to show that they weren't.
<http://europe.cnn.com/2002/BUSINESS/04/29/vivendi.hacker/index.html>
<http://www.wired.com/news/business/0,1367,52162,00.html>
<http://www.silicon.com/a52986>
<http://www.silicon.com/a53068>
<http://www.vnunet.com/News/1131506>

Excellent essay on digital rights management and copy protection:
<http://www.reason.com/0205/fe.mg.hollywood.shtml>

The GAO released a report on "National Preparedness: Technologies to
Secure Federal Buildings."  The report reviews a range of commercially
available security technologies, from swipe cards to biometrics.
<http://www.gao.gov/new.items/d02687t.pdf>

Brute-force attack against credit card payments through
Authorize.net.  Attackers run random numbers through the system, and
occasionally get lucky.
<http://www.msnbc.com/news/742677.asp>
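
(A rough sketch of why such guessing pays off occasionally -- this is an
illustration by the editor, not the actual attack; the 16-digit format
and Luhn checksum are standard for payment card numbers:)

    # About one in ten random 16-digit strings passes the Luhn
    # checksum, so an attacker probing a payment gateway with random
    # numbers regularly submits syntactically valid card numbers.
    import random

    def luhn_valid(digits: str) -> bool:
        """Standard Luhn check used by payment card numbers."""
        total = 0
        for i, ch in enumerate(reversed(digits)):
            d = int(ch)
            if i % 2 == 1:     # double every second digit from the right
                d = d * 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    guesses = ("".join(random.choices("0123456789", k=16))
               for _ in range(10_000))
    print(sum(map(luhn_valid, guesses)), "of 10000 pass the checksum")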

Nice article on building a taxonomy of different types of network
attacks:
<http://www.osopinion.com/perl/story/17692.html>


** *** ***** ******* *********** *************

                Counterpane News


Two pieces of big news this month.  One, Counterpane was named in the
Red Herring 100:
<http://www.counterpane.com/pr-red100.html>
<http://www.redherring.com/insider/2002/0513/tech-rh100.html>

Two, we have a new distribution agreement with VeriSign.  VeriSign
offers a portfolio of managed security services.  Two weeks ago, they
added Counterpane's Managed Security Monitoring to that portfolio.
Every managed service contract that VeriSign sells will include
Counterpane's monitoring.
<http://corporate.verisign.com/news/2002/pr_20020507.html>
<http://www.theregister.co.uk/content/55/25168.html>

Schneier is speaking at RSA Japan on May 29th.
<http://www.key3media.co.jp/rsa2002/eng/index.html>

Schneier is speaking at an InfraGard conference in Cleveland on June 7th.
<http://www.nocinfragard.org>

Schneier is speaking at the USENIX Annual Conference in Monterey on 6/15.
<http://www.usenix.org/events/usenix02/>

Schneier is speaking twice at NetSec in Chicago on 6/18.


** *** ***** ******* *********** *************

          Fun with Fingerprint Readers



Tsutomu Matsumoto, a Japanese cryptographer, recently decided to look at
biometric fingerprint devices.  These are security systems that attempt
to identify people based on their fingerprints.  For years the companies
selling these devices have claimed that they are very secure, and that it
is almost impossible to fool them into accepting a fake finger as
genuine.  Matsumoto, along with his students at Yokohama National
University, showed that they can be reliably fooled with a little
ingenuity and $10 worth of household supplies.

Matsumoto uses gelatin, the stuff that Gummi Bears are made out of.
First he takes a live finger and makes a plastic mold.  (He uses a
free-molding plastic that is sold at hobby shops.)  Then he pours liquid
gelatin into the mold and lets it harden.  (The gelatin comes in solid
sheets, is used to make jellied meats, soups, and candies, and is sold in
grocery stores.)  This gelatin fake finger fools fingerprint detectors
about 80% of the time.

His more interesting experiment involves latent fingerprints.  He takes a
fingerprint left on a piece of glass, enhances it with a cyanoacrylate
adhesive, and then photographs it with a digital camera.  Using
Photoshop, he improves the contrast and prints the fingerprint onto a
transparency sheet.  Then he takes a photo-sensitive printed-circuit
board (PCB) and uses the fingerprint transparency to etch the fingerprint
into the copper, making it three-dimensional.  (You can find
photo-sensitive PCBs, along with instructions for use, in most
electronics hobby shops.)  Finally, he makes a gelatin finger using the
print on the PCB.  This also fools fingerprint detectors about 80% of the
time.

Gummy fingers can even fool sensors being watched by guards.  Simply form
the clear gelatin finger over your own.  This lets you hide it as you
press your own finger onto the sensor.  After it lets you in, eat the
evidence.

Matsumoto tried these attacks against eleven commercially available
fingerprint biometric systems, and was able to reliably fool all of them.
The results are enough to scrap the systems completely, and to send the
various fingerprint biometric companies packing.  Impressive is an
understatement.

There's both a specific and a general moral to take away from this
result.  Matsumoto is not a professional fake-finger scientist; he's a
mathematician.  He didn't use expensive equipment or a specialized
laboratory.  He used $10 of ingredients you could buy, and whipped up his
gummy fingers in the equivalent of a home kitchen.  And he defeated
eleven different commercial fingerprint readers, with both optical and
capacitive sensors, and some with "live finger detection" features.
(Moistening the gummy finger helps defeat sensors that measure moisture
or electrical resistance; it takes some practice to get it right.)  If he
could do this, then any semi-professional can almost certainly do much,
much more.

More generally, be very careful before believing claims from security
companies.  All the fingerprint companies have claimed for years that
this kind of thing is impossible.  When they read Matsumoto's results,
they're going to claim that the attacks don't really work, or that they
don't apply to their products, or that they've fixed the problem.  Think
twice before believing them.


Matsumoto's paper is not on the Web.  You can get a copy by asking:
Tsutomu Matsumoto <[log in to unmask]>

Here's the reference:
T. Matsumoto, H. Matsumoto, K. Yamada, S. Hoshino, "Impact of Artificial
Gummy Fingers on Fingerprint Systems," Proceedings of SPIE Vol. #4677,
Optical Security and Counterfeit Deterrence Techniques IV, 2002.

Some slides from the presentation are here:
<http://www.itu.int/itudoc/itu-t/workshop/security/present/s5p4.pdf>

My previous essay on the uses and abuses of biometrics:
<http://www.counterpane.com/crypto-gram-9808.html#biometrics>

Biometrics at the shopping center: pay for your groceries with your
thumbprint.
<http://seattlepi.nwsource.com/local/68217_thumb27.shtml>


** *** ***** ******* *********** *************

             Comments from Readers



From: "Joosten, H.J.M." <[log in to unmask]>
Subject: How to Think About Security

 > More and more, the general public is being asked to make
 > security decisions, weigh security tradeoffs, and accept
 > more intrusive security.
 >
 > Unfortunately, the general public has no idea how to do this.

People are quite capable of making security decisions.  People get
burglar alarms, install locks, and get insurance all the time.  Of course
it doesn't always help, and people may differ with respect to the
security levels they require, but I don't see a fundamental difference in
decision making.

So what IS the difference then?  It's that people in "the real world"
have an idea of what the security problems are.  Your car can get stolen.
You can get burgled.  And so on.  They have a perception of the
consequences: having to get a new car, having to replace stolen stuff,
repairing the entrance.

People don't have this idea with respect to information security.  They
may go: "So what about firewall settings?  Customers don't complain, they
pay their bills.  So WHAT are the problems that I must solve?"  People
don't seem to feel the REAL consequences.  Within companies, this might
be an organisational issue.  For individuals, not all that much seems to
go wrong if you stick to whatever your ISP says is good practice.

We, as security experts, keep talking about what MIGHT happen, and we're
all too happy if some incident actually happens.  Most of these incidents
are not actually felt by people, so they don't perceive them as their
problem.  We can frighten them by pointing to these incidents.  But even
then they don't have a security problem.  Their problem is one of fear,
and that can be gotten rid of easily by the same person that installed
the fear.  That's how some security sales can be, and are, made to work.

So while your "Step One: What problem does a measure solve?" is a
crucial
step, there's real work to do before that.  We should come up with a
self-help method for the general public, that they can use to assess
what
kind of problems they have, and actually perceive, from their own
perspective.  They are responsible, meaning that when things turn sour,
they're the ones that face the consequences.  Where they don't perceive
realistic consequences, they don't have problems.  If your or your
neighbour's house gets burgled, or a house in your block, that's when
you
perceive a problem, and then you're going to do something about it.
People
can do that.  They've been doing it all the time.  And then, but only
then,
is your five-step process going to be of help.



From: "John Sforza" <[log in to unmask]>
Subject: How to Think About Security

I agree that the general public (at this point in time) has limited
experience in making informed and intelligent security decisions about
infrastructure, information, and privacy issues.  I, however, strongly
disagree that the people in the computer security arena are better at
these just because they are struggling with the issues continually.  I
would be much more impressed with the computer security community's
competence if there were some indication that their activity was showing
hard results.

I see the computer security industry as babes in the woods when it comes
to activities beyond their computers, software, and immediate domain.
The professions of intelligence operations, operations security,
counter-intelligence, and plain spying have had five decades of
experience and centuries of operational history.  It is you who are the
new kids on the block, applying historical best practices from these
disciplines to so-called computer and information security and claiming
conceptual credit.  The general computer security population is less than
15 years old in terms of experience, with the majority having less than
ten years in anything beyond ACLs, passwords, and basic encryption.
Almost none of the computer security population has any background in
physical security, covert operations, cultural context as regards
security, or real-world experience beyond their cubicles.

I have also found that computer security people in general make lousy
instructors for the general public, primarily due to narrow vision,
technical arrogance, and a base failure to understand the learning
process.  Just because you are an expert in one field, do not assume
across-the-board expertise.

Finally, while your five steps are good general practices (centuries old
and documented), they are not foolproof in any formal mode.  Let me state
that another way -- if it were that simple, why are your so-called
experts so busy in your own yards?

Your statements continue to foster the greatest general security issue we
face: enthusiastic amateurs with certificates being told that they are
the greatest thing since sliced bread.



From: "John.Deters" <[log in to unmask]>
Subject: How to Think About Security

Much of security is emotional and psychological, as the events following
the 9/11 attacks prove.  The stock markets, leisure travel, and all the
other industries affected are relying on the sum of all these measures,
including the placebos, to recover.  It may be a house of cards or window
dressing to those of us who understand security, but the vast majority of
the population does not understand security.  They need to see "someone
doing something," because the enormous reality of the tragedy has
transcended rational thought.

Given that, I'd argue that much of what you call "placebo" does indeed
induce a positive, real effect on the country and the economy.

Is the recovery real?  I have to argue "yes."  People are boarding planes
again.  Stock markets are slowly regaining lost value.  Even if the
market recovery is built on a house of cards, on the imagined security
gained by the PATRIOT Act or by National Guard troops in the airports, it
all acts to restore confidence.

So, does this mean Christmas has come early for the snake-oil salesmen?
Yes.  Should you patriotically abandon pointing out the truth about these
systems being shams?  Of course not.  Real security is still a noble and
desirable goal, and we all know that false security both hides weaknesses
and opens holes for attacks.  My point is that we all need to understand
the whole system that is our world: MOST of it is driven subjectively and
emotionally by people who have never heard of cryptography, or who think
security comes from a spray bottle on the belt of the white-shirted guy
in the airport.  Think about how much slower the country might rebuild
confidence if we implemented only the very few measures that served to
truly improve security.



From: [log in to unmask]
Subject: Liability and Security

With regard to your essay on "Liability and Security," I would have to
agree with you in most respects, with one major exception.  After harping
on these exact points for some time, I have concluded that it is not even
necessary for the first point (liability legislation) to exist for the
second (insurance-driven security) to come about.  In fact, given the
preponderance of evidence of the ignorance of our legislators about
security, and the influence of powerful vendors on same, it may be better
that there are no laws as of yet (case law for civil liability may be a
better approach for forensics).

On the other hand, it is impossible for me to imagine that any vendor
could convince an insurance company to give lower rates to its customers
if its product does not perform.  The insurer is driven by its view of
predicted value, based on its collected history of past performance, and
sets the premium rates and categories based on its best estimates and its
desire for profit.  If it is not profitable to do it, then it won't do
it.  I don't even really care if a vendor pays an insurer to give
customers a break if they use the vendor's product, as this amounts to an
economic incentive for the vendor, just as reduced sales (from customers
not buying products that cost too much to insure) does, but on the other
end.  The vendor will have to make it worth the insurer's bottom line to
change the rates, so the "bribe" will still have to be "honest."  When it
is cheaper to make good products than to pay for their repair and the
damage they cause (or to subsidize a third party to do so), the vendor
will go in that direction.

For a customer to buy insurance (and make it possible for the insurers to
drive security), they must have some incentive.  This has to come from
liability, but this is liability in civil suits -- not of the vendor
directly, but of the company that uses the vendor's products.  This is
only likely to be of consequence for institutional software consumers,
but that is likely to be enough.

In a sort of ideal version of all this, the insurers act as Bayesian nets
providing (partial) information on the true total cost of ownership (TCO)
of a product.  Here, the TCO is initial cost (sales price of the software
plus training), use cost (hard to quantify, but the day-to-day cost of
using it in terms of interoperability, user interface, etc.), maintenance
cost (keeping it running, upgrading it, adapting to new users and
configurations), and liability cost (the insurance).  Right now, the last
item is left out most of the time.  Even the two middle terms are only
now being given the attention they deserve, and I suspect that there is a
strong "tipping effect" on the benefits of use as the fraction of
customers that use a particular piece of software changes (the "Betamax
factor").  The latter can provide very strong incentives to vendors NOT
to lose market share, which can amplify the effect that insurers have.
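
(A minimal restatement of that decomposition in code -- the breakdown is
from the letter above; every number below is invented for illustration:)

    # TCO = initial + use + maintenance + liability, per the letter.
    def total_cost_of_ownership(initial, use, maintenance, liability):
        # initial     = sales price of the software plus training
        # use         = day-to-day cost: interoperability, UI, etc.
        # maintenance = keeping it running, upgrading, reconfiguring
        # liability   = the insurance premium, the term usually omitted
        return initial + use + maintenance + liability

    print(total_cost_of_ownership(initial=50_000, use=20_000,
                                  maintenance=30_000, liability=15_000))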



From: "John Brooks" <[log in to unmask]>
Subject: Liability and Security

Pushing risk management towards insurance companies has two major
problems.

First, the inherent conservatism of the insurance industry.  It can take
a very long time to get its collective head around new scenarios.  In the
meantime, it DOES accept cash from willing punters, but the cover
provided can be worthless.  For example: "data insurance" here in the UK.
This has been around for a long time and purports to cover all (or most)
risks.  But for a long time the actual value of the data was calculated
in bizarre ways, and there was NO cover against the problems and extra
costs caused by not having access to it!  I expect this situation has
improved more recently, with more insurers entering the market.

Second, the effects of monopoly.  In the UK, there are a couple of "trade
organisations" close to the insurance industry with members that do
physical security system installation (e.g., NACOSS).  No doubt these
"clubs" have value, but much of this seems to be for their members rather
than for security system users.  Pardon my cynicism -- but we're talking
basic human nature here.  Any organisation with exclusivity based on
membership or other mechanisms can create "negative influences" on the
industry or interest group(s) it is supposed to help.  Potential
parallels with the (mainly U.S.-based) music-industry groups (RIAA, etc.)
are obvious.

I've nothing against insurance as such.  It's the quality of the risk
analysis and the number of places this is done (i.e., hopefully more than
one!) that bother me.  Also, everyone has to bear in mind that any
insurance company's "prime directive" is: "If possible, don't pay out!"



From: Glenn Pure <[log in to unmask]>
Subject: Liability and Security

I agree fully with your point that the software industry should be just
as liable for defects in its products as any other industry.  All the
same, winning a damages claim for a defective security feature may be
very difficult in many cases, because of the difficulty in determining
the degree to which a software flaw contributed to the loss (in
comparison with a pile of other software, configuration, architecture,
and management/personnel issues).

But worse, I think liability simply won't work.  Making software
companies liable will provide them only with an incentive to write clever
licencing agreements that absolve them of responsibility.  While you
might think this comment is motivated by cynicism, it's not.  It's basic
logic.  A software company faced with the prospect of blowing out its
software production costs and development times will look to an
alternative means of reducing its liability.  I'm sure the brilliant
minds in this industry will come up with plenty of cheaper ideas (that
avoid the expensive route of actually fixing the problem), including
"creative" revision of licencing terms.

Likewise, software buyers won't be any more convinced to buy better
security products (or products with better liability provisions) if, as
you clearly argue, they aren't particularly concerned about it in the
first place.  And to the extent they are concerned, they won't rest any
easier at night knowing they can sue some negligent software company for
a security bug.  That would be like arguing that stringent commercial
aviation maintenance and safety controls are no longer needed provided
there's a functional parachute under every passenger's seat!



From: [log in to unmask]
Subject: Liability and Security

I think you are overselling the wonders of liability in several ways.
First, you are not admitting the costs of liability.  Companies are going
to go out of business through no fault of their own, because some jury or
judge did not manage to correctly comprehend the situation that provoked
a lawsuit.  Good people will lose jobs, and good products will be driven
off the market.  Insurance can reduce this problem, but will never
eliminate it.  Good products made by people who just don't happen to know
the right other people will be ignored, because insurance companies won't
evaluate them.  Juries will award payouts that are totally out of
proportion to the events they're penalizing.  This is just the beginning.

Second, you are not admitting that insurance companies, all too often,
are so slow to react that the brick-and-mortar security industry is
largely a joke.  Security for them is not about stopping people, or even
about catching them.  It is purely a matter of making sure the finances
work out right.  That's exactly what happens today, too, except that
different people are paying the price in different ways.  Liability may
be (and I think is) a fairer model, on average, but it is not necessarily
"more secure" in a technical sense, and claiming otherwise is pure and
simple naivete.  Your company's services might reduce insurance rates,
but odds are that simple, stupid, and largely ineffective methods will
too, and the cost/benefit results may or may not end up being what you
would hope for.  Remember that insurance companies face a cost tradeoff
too: determining what measures work and how well, and asking people to
pay a certain amount up front to reduce insurance rates, are costs, and
so they only do these so well, and no better -- however well is needed to
be competitive with other insurance companies, as it happens.  Given the
abysmal state of most insured secure facilities, it is obvious that this
is an imperfect mechanism.

I do believe liability-based security is a good thing, but it is not a
panacea.  We all know nothing is perfect, and nobody expects it to be
perfect.



From: Todd Owen <[log in to unmask]>
Subject: Liability and Security

Your argument for software to entail legal liability is very interesting,
and I certainly agree that the root cause of insecure software is not a
technological issue.  But it may be worth noting that what you suggest
applies to a specific type of society: namely, our own Western society
and its corporate/capitalist economic system.

The problem (as you have already pointed out) is not that companies can't
improve the security of products, but that they don't want to.  Managers
don't want to spend time and money on improving security because of
market pressure, and because corporate culture teaches them the mantra of
"maximise profit" at the expense of everything else.  Of course, security
is not the only victim of this way of thinking (also called "economic
rationalism").  This business methodology also justifies a great amount
of pollution and environmental destruction, unethical treatment of
employees (especially workers in third world countries), and other
unethical and/or illegal behaviour such as false advertising and the
wielding of monopoly power.

Various measures are used to combat these issues, ranging from unionism
to legislation.  If liability can answer the problem of software quality,
then it may be the solution we need.

However, I think that Greg Guerin (January 2002 issue) is right to be
concerned about the burden of liability on small firms and on open source
software.  The effect of liability (and insurance) in these cases would
depend on exactly how liability was legislated.  But if it ended up
disadvantaging open source software, then I would find this sadly ironic,
because the Free Software movement is in part a response to the corporate
"greed is good" mentality, which seems to me to be the root cause of the
software quality problem.

Another point is that software liability would encourage the "litigious
mindset" of the modern world (especially the USA).  Would we have
opportunistic lawsuits holding the software manufacturer liable for a
network intrusion even though the root password was set to "password"?

I think that, in the long term, a move aimed at encouraging corporations
to actually care about their customers (rather than only about profit)
would be more beneficial than forcing them to pay attention to quality
through legal measures.  But that would require a lot more than political
lobbying to achieve.



From: "Tousley, Scott W." <[log in to unmask]>
Subject:  2002 CSI/FBI Computer Crime Survey

I think this year's study continues the slide away from factual response
summary and towards issue advocacy.  The advocacy is so strongly
presented that the neutral reader starts to put this "report" aside as
just another sponsored item of marketing.

Once again, I am very disappointed to see no effort to normalize the
reported criminal and malicious activity against background trends of
economic activity, electronic commerce, etc.  I continue to read all of
the CSI "studies" as indicating a relatively constant level of malicious
activity, where the apparent growth reported in the surveys is almost
entirely proportional to, and explained by, the increasing presence of
network-related economic activity, increasing awareness of computer
security issues, etc.  I think CSI is missing a huge opportunity here,
because if they re-baselined the information by factoring out the
background, they could then address likely trends in a more objective
sense, and with more credibility among neutral readers.  This sustained
CSI effort has a significant "first-mover" advantage in tracking these
commercial impact trends, and I would regret seeing it frittered away
through lost quality and focus on the core reported information.



From: David Haworth <[log in to unmask]>
Subject: CBDTPA

In all the articles I've read criticizing the CBDTPA, I've never seen
anyone write about one of its consequences: that it might place the USA
in violation of the WIPO treaties that it so carefully (and
overreachingly) implemented with the DMCA.

In the "Agreed Statements" attached to Article 12 of the WIPO copyright
treaty (the article that requires legal remedies against the removal of
digital rights management information), it clearly states:

"It is further understood that Contracting Parties will not rely on this

Article to devise or implement rights management systems that would have

the effect of imposing formalities which are not permitted under the
Berne
Convention or this Treaty, prohibiting the free movement of goods or
impeding the enjoyment of rights under this Treaty."

The CBDTPA, coupled with Microsoft's DRM-OS patent and no doubt a whole
cartload of other software patents that are unenforceable outside the
U.S., would provide exactly the kind of barrier to the free movement of
goods that the U.S., in agreeing to the statement, contracted not to
erect.



From: [log in to unmask] (Kragen Sitaker)
Subject: SNMP Vulnerabilities

In the April issue of Crypto-Gram, Bancroft Scott wrote:
 > If applications that use ASN.1 are properly implemented
 > and tested they are as safe as any other properly
 > implemented and tested application.

If, by "properly implemented," you mean "correct," then your statement
is
probably vacuously true; I would be very surprised if there were any
programs complex enough to use ASN.1 that were free of bugs.

If, on the other hand, you mean "implemented according to current best
programming practices," then your statement is still probably vacuously
true.  Current best programming practices are probably those practiced by
the onboard shuttle software group, which has a defect rate on the order
of one bug per 10,000 lines of code; they cost on the order of a million
dollars (times or divided by five -- I'd be delighted to get more
accurate numbers) per line of code, about 3,000 times the normal amount.
I don't think any ASN.1 programs have been implemented in this manner,
but maybe I'm wrong.  Maybe the onboard shuttle system uses ASN.1.

If, by "properly implemented," you mean "implemented by reasonably
competent programmers using reasonable precautions," then you are
completely wrong.  "Properly implemented" programs (by this definition)
contain bugs, usually very many bugs, but some of them contain many more

bugs than others.  The prevalence of bugs in a particular "properly
implemented" program is influenced very strongly by its internal
complexity, which is largely a function of the complexity required by
its
specification.

If ASN.1 requires greater complexity from the programs that use it than
the alternatives do, then programs that use ASN.1 will contain more bugs;
some fraction of these bugs will be security vulnerabilities.  On the
other hand, if using ASN.1 is simpler than using the alternatives, then
programs that use it will contain fewer bugs.  I do not know enough about
ASN.1 to know which of these is true.

The "and tested" part is an irrelevant distraction; testing is a
remarkably
ineffective way to reduce the number of security vulnerabilities in
software.  Other practices, such as featurectomy, ruthless
simplification,
code reviews, design by contract, and correctness arguments, are much
more
effective.


** *** ***** ******* *********** *************


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on computer security and cryptography.  Back
issues are available on <http://www.counterpane.com/crypto-gram.html>.

To subscribe, visit <http://www.counterpane.com/crypto-gram.html> or send
a blank message to [log in to unmask]  To unsubscribe, visit
<http://www.counterpane.com/unsubform.html>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who
will find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as
long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO of
Counterpane Internet Security Inc., the author of "Secrets and Lies" and
"Applied Cryptography," and an inventor of the Blowfish, Twofish, and
Yarrow algorithms.  He is a member of the Advisory Board of the
Electronic Privacy Information Center (EPIC).  He is a frequent writer
and lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed
Security Monitoring.  Counterpane's expert security analysts protect
networks for Fortune 1000 companies world-wide.

<http://www.counterpane.com/>

Copyright (c) 2002 by Counterpane Internet Security, Inc.

************************************************************************************
Distributed through Cyber-Society-Live [CSL]: CSL is a moderated
discussion list made up of people who are interested in the
interdisciplinary academic study of Cyber Society in all its
manifestations.  To join the list please visit:
http://www.jiscmail.ac.uk/lists/cyber-society-live.html
*************************************************************************************
