CYBER-SOCIETY-LIVE Archives
CYBER-SOCIETY-LIVE@JISCMAIL.AC.UK

Subject: [CSL]: CRYPTO-GRAM, January 15, 2004
From: J Armitage <[log in to unmask]>
Reply-To: Interdisciplinary academic study of Cyber Society <[log in to unmask]>
Date: Fri, 16 Jan 2004 08:14:15 -0000
Content-Type: text/plain
Parts/Attachments: text/plain (1090 lines)

From: Bruce Schneier [mailto:[log in to unmask]]
Sent: 15 January 2004 09:40
To: [log in to unmask]
Subject: CRYPTO-GRAM, January 15, 2004


                  CRYPTO-GRAM

               January 15, 2004

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            [log in to unmask]
            <http://www.schneier.com>
           <http://www.counterpane.com>


A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

Back issues are available at
<http://www.schneier.com/crypto-gram.html>. To subscribe, visit
<http://www.schneier.com/crypto-gram.html> or send a blank message to
[log in to unmask]


** *** ***** ******* *********** *************

In this issue:
      Color-Coded Terrorist Threat Levels
      Crypto-Gram Reprints
      Fingerprinting Foreigners
      News
      Terrorists and Almanacs
      Counterpane News
      More "Beyond Fear" Reviews
      Security Notes from All Over: President Musharraf and
        Signal Jammers
      WEIS
      New Credit Card Scam
      Diverting Aircraft and National Intelligence
      Comments from Readers


** *** ***** ******* *********** *************

       Color-Coded Terrorist Threat Levels


From 21 December 2003 to 9 January 2004, the national threat level --
as established by the U.S. Department of Homeland Security -- was
Orange. Orange is one level above Yellow, which is as low as the
threat level has gotten since the scale was established in the months
following 9/11. There are two levels below Yellow. There's one level
above Orange.

This is what I wrote in Beyond Fear: "The color-coded threat alerts
issued by the Department of Homeland Security are useless today, but
may become useful in the future. The U.S. military has a similar
system; DEFCON 1-5 corresponds to the five threat alert levels: Green,
Blue, Yellow, Orange, and Red. The difference is that the DEFCON
system is tied to particular procedures; military units have specific
actions they need to perform every time the DEFCON level goes up or
down. The color-alert system, on the other hand, is not tied to any
specific actions. People are left to worry, or are given nonsensical
instructions to buy plastic sheeting and duct tape. Even local police
departments and government organizations largely have no idea what to
do when the threat level changes. The threat levels actually do more
harm than good, by needlessly creating fear and confusion (which is an
objective of terrorists) and anesthetizing people to future alerts and
warnings. If the color-alert system became something better defined,
so that people know exactly what caused the levels to change, what the
change means, and what actions they need to take in the event of a
change, then it could be useful. But even then, the real measure of
effectiveness is in the implementation. Terrorist attacks are rare,
and if the color-threat level changes willy-nilly with no obvious cause
or effect, then people will simply stop paying attention. And the
threat levels are publicly known, so any terrorist with a lick of sense
will simply wait until the threat level goes down."

Living under Orange reinforced this: it didn't mean anything. Tom
Ridge's admonition that Americans "be alert, but go about their
business" is nonsensical advice. I saw little that could be considered
a good security trade-off, and a lot of draconian security measures
and security theater.
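
The difference between a system tied to actions and one that is not is
easy to make concrete. Here is a minimal sketch in Python, with
entirely hypothetical procedure lists (the real DEFCON checklists are
classified and unit-specific):

  # Hypothetical procedures, for illustration only.
  DEFCON_STYLE = {
      "Green":  ["maintain normal readiness"],
      "Blue":   ["increase watch rotations"],
      "Yellow": ["recall key personnel", "tighten access control"],
      "Orange": ["disperse assets", "staff command posts around the clock"],
      "Red":    ["move to maximum readiness"],
  }

  # The color-alert system as described above: levels with no actions.
  COLOR_ALERT = {level: [] for level in DEFCON_STYLE}

  def on_level_change(system, new_level):
      """Return the checklist a unit executes when the level changes."""
      return system.get(new_level, [])

  print(on_level_change(DEFCON_STYLE, "Orange"))  # concrete steps to take
  print(on_level_change(COLOR_ALERT, "Orange"))   # [] -- nothing but worry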

I think the threat levels are largely motivated by politics. There are
two possible reasons for the alert.

Reason 1: CYA. Governments are naturally risk averse, and issuing
vague threat warnings makes sense from that perspective. Imagine if a
terrorist attack actually did occur. If they didn't raise the threat
level, they would be criticized for not anticipating the attack. As
long as they raised the threat level they could always say "We told you
it was Orange," even though the warning didn't come with any practical
advice for people.

Reason 2: To gain Republican votes. The Republicans spent decades
running on the "Democrats are soft on Communism" platform. They've
just discovered the "Democrats are soft on terrorism" platform. Voters
who are constantly reminded to be fearful are more likely to vote
Republican, or so the theory goes, because the Republicans are viewed
as the party that is more likely to protect us.

(These reasons may sound cynical, but I believe that the Administration
has not been acting in good faith regarding the terrorist threat, and
their pronouncements in the press have to be viewed under that light.)

I can't think of any real security reasons for alerting the entire
nation, and any putative terrorist plotters, that the Administration
believes there is a credible threat.


** *** ***** ******* *********** *************

             Crypto-Gram Reprints



Crypto-Gram is currently in its seventh year of publication. Back
issues cover a variety of security-related topics, and can all be found
on <http://www.schneier.com/crypto-gram.html>. These are a selection
of articles that appeared in this calendar month in other years.

Militaries and Cyber-War:
<http://www.schneier.com/crypto-gram-0301.html#1>

A cyber Underwriters Laboratories?
<http://www.schneier.com/crypto-gram-0101.html#1>

Code signing:
<http://www.schneier.com/crypto-gram-0101.html#10>

Block and stream ciphers:
<http://www.schneier.com/crypto-gram-0001.html#Blockbuster>


** *** ***** ******* *********** *************

           Fingerprinting Foreigners



Imagine that you're going on vacation to some exotic country. You get
your visa, plan your trip, and take a long flight. How would you feel
if, at the border, you were photographed and fingerprinted? How would
you feel if your biometrics stayed in that country's computers for
years? If your fingerprints could be sent back to your home
country? Would you feel welcomed by that country, or would you feel
like a criminal?

This week the U.S. government began doing just that to an expected 23
million visitors to the U.S. The US-VISIT program is designed to
capture biometric information at our borders. Only citizens of 27
countries who don't need a visa to enter the U.S., mostly in Europe,
are exempt. Currently all 115 international airports and 14 seaports
are covered, and over the next three years this program will be
expanded to cover at least 50 land crossings, and also to screen
foreigners exiting the country.

None of this comes cheaply. The program cost $380 million in 2003 and
will cost at least the same in 2004. But that's just the start; the
Department of Homeland Security's total cost estimate nears $10 billion.

According to the Bush administration, the measures are designed to
combat terrorism. As a security expert, I find it hard to see
how. The 9/11 terrorists would not have been deterred by this system;
many of them entered the country legally on valid passports and
visas. We have a 5,500-mile border with Canada and another 2,000-mile
border with Mexico. Two to three hundred thousand people enter the
country illegally each year from Mexico. Two to three million people
enter the country legally each year and overstay their visas.
Capturing the biometric information of everyone entering the country
doesn't make us safer.

And even if we could completely seal our borders, fingerprinting
everyone still wouldn't keep terrorists out. It's not like we can
identify terrorists in advance. The border guards can't say "this
fingerprint is safe; it's not in our database" because there is no
comprehensive fingerprint database for suspected terrorists.

More dangerous is the precedent this program sets. Today the program
only affects foreign visitors with visas. The next logical step is to
fingerprint all visitors to the U.S., and then everybody, including
U.S. citizens.

Following this train of thought quickly leads to sinister
speculation. There's no reason why the program should be restricted to
entering and exiting the country; why shouldn't every airline flight be
"protected?" Perhaps the program can be extended to train rides, bus
rides, entering and exiting government buildings. Ultimately the
government will have a biometric database of every U.S. citizen--face
and fingerprints--and will be able to track their movements. Do we
want to live in that kind of society?

Retaliation is another worry. Brazil is now fingerprinting Americans
who visit that country, and other countries are expected to follow
suit. All over the world, totalitarian governments will use our
fingerprinting regime to justify fingerprinting Americans who enter
their countries. This means that your prints are going to end up on
file with every tin-pot dictator from Sierra Leone to Uzbekistan. And
Tom Ridge has already pledged to share security information with other
countries.

Security is a trade-off. When deciding whether to implement a security
measure, we must balance the costs against the benefits. Large-scale
fingerprinting is something that doesn't add much to our security
against terrorism, and it costs an enormous amount of money that could
be better spent elsewhere. Spending the funds on compiling, sharing,
and enforcing the terrorist watch list would be a far better security
investment. As a security consumer, I'm getting swindled.

America's security comes from our freedoms and our liberty. For over
two centuries we have maintained a delicate balance between freedom and
the opportunity for crime. We deliberately put laws in place that
hamper police investigations, because we know we are more secure
because of them. We know that laws regulating wiretapping, search and
seizure, and interrogation make us all safer, even if they make it
harder to convict criminals.

The U.S. system of government has a basic unwritten rule: the
government should be granted only limited power, and for limited
purposes, because of the certainty that government power will be
abused. We've already seen the USA PATRIOT Act powers, granted to the
government to combat terrorism, directed against common
crimes. Allowing the government to create the infrastructure to
collect biometric information on everyone it can is not a power we
should grant the government lightly. It's something we would have
expected in former East Germany, Iraq, or the Soviet Union. In all of
these countries greater government control meant less security for
citizens, and the results in the U.S. will be no different. It's bad
civic hygiene to build an infrastructure that can be used to facilitate
a police state.


A version of this essay originally appeared in Newsday.
<http://www.newsday.com/news/opinion/ny-vpsch143625202jan14,0,1880923.story> or <http://tinyurl.com/2yy7t>

Office of Homeland Security webpage for the program:
<http://www.dhs.gov/dhspublic/interapp/editorial/editorial_0333.xml>

News articles:
<http://www.washtimes.com/national/20031201-115121-4339r.htm>
<http://www.washtimes.com/national/20031027-112510-5818r.htm>
<http://www.nytimes.com/reuters/news/news-security-usa-visas.html>
<http://gcn.com/vol1_no1/daily-updates/24536-1.html>
<http://www.sunspot.net/news/custom/attack/bal-airport0106,0,42711.story>
<http://www.cnn.com/2004/US/01/04/visit.program/>
<http://www.nytimes.com/2004/01/05/national/05CND-SECU.html>
<http://www.ilw.com/lawyers/immigdaily/doj_news/2004,0106-hutchinson.shtm>
<http://www.theage.com.au/articles/2004/01/06/1073268031785.html>
<http://www.thestar.co.za/index.php?fSectionId=132&fArticleId=318749>
<http://www.ilw.com/lawyers/articles/2003,1231-krikorian.shtm>

Opinions:
<http://news.mysanantonio.com/story.cfm?xla=saen&xlb=1020&xlc=1074396>
<http://www.rockymountainnews.com/drmn/opinion/article/0,1299,DRMN_38_2475765,00.html> or <http://tinyurl.com/3bqze>
<http://www.shusterman.com/pdf/advocacy61703.pdf>
<http://www.washingtontechnology.com/ad_sup/homeland-coalition/2.html>

Brazil fingerprints U.S. citizens in retaliation:
<http://reprints.msnbc.com/id/3875747/>


** *** ***** ******* *********** *************

                      News



Yahoo is planning on combating spam by requiring e-mail to be
authenticated. The problem, they claim, is that there's no way of
knowing who the sender really is. It seems obvious to me that this
won't stop spam at all. Spammers are already breaking into computers
and hijacking legitimate users' e-mail systems. Spammers are already
sending mail out of random countries and stolen accounts. How exactly
will this make things better?
<http://www.newscientist.com/news/news.jsp?id=ns99994459>
<http://edition.cnn.com/2003/TECH/internet/12/05/spam.yahoo.reut/>
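
For the curious, the idea is for the sending domain to attach a
verifiable signature to outgoing mail. A minimal conceptual sketch in
Python; note that a real scheme would use public-key signatures with
the verification key published in the sender's DNS, not the shared
secret used here for brevity:

  import hashlib
  import hmac

  # Hypothetical key material; real deployments publish a public key in DNS.
  DOMAIN_KEY = b"example.com-signing-key"

  def sign_message(headers: str, body: str) -> str:
      # Sending server computes a signature over selected headers and body.
      payload = (headers + "\r\n" + body).encode()
      return hmac.new(DOMAIN_KEY, payload, hashlib.sha256).hexdigest()

  def verify_message(headers: str, body: str, signature: str) -> bool:
      # Receiving server recomputes the signature and compares.
      return hmac.compare_digest(sign_message(headers, body), signature)

  sig = sign_message("From: alice@example.com", "Hello")
  assert verify_message("From: alice@example.com", "Hello", sig)

Note that this only proves which domain sent the message: a spammer
sending through a hijacked account at a legitimate domain produces a
perfectly valid signature, which is exactly the objection above.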

I've regularly written that secrecy is more often harmful to security
than helpful. This article discusses that: the Bush Administration is
using terrorism as an excuse to keep many aspects of government secret,
but the real threat is more often the government itself.
<http://www.usnews.com/usnews/issue/031222/usnews/22secrecy.htm>

Here's some good Microsoft news. The new update to Windows XP will
include the Internet Connection Firewall (ICF). It will be on by
default and more rigorous in its protection. Seems like a security
improvement to me.
<http://www.eweek.com/article2/0,4149,1413404,00.asp>

OnStar, the communications and navigation system in GM cars, can be
used to surreptitiously eavesdrop on passengers:
<http://www.newsmax.com/archives/articles/2003/12/10/213653.shtml>

More TSA absurdity:
<http://www.post-gazette.com/pg/03362/255283.stm>

And the British react to a decision to put sky marshals on selected
flights into the U.S.:
<http://www.channelnewsasia.com/stories/afp_world/view/64011/1/.html>

Interesting article on a computer security researcher who is using
biological metaphors in an effort to create next-generation
computer-security tools. This is interesting work, but I am skeptical
about a lot of it. The biological metaphor works only marginally well
in the computer world. Certainly the monoculture argument makes sense
in the computer world, but biological security is generally based on
sacrificing individuals for the good of the species -- which doesn't
really apply in the computer world.
<http://www.computerworld.com/securitytopics/security/story/0,10801,88359,00.html> or <http://tinyurl.com/2bwah>

There are two interesting aspects to this case. First, the judge ruled
that a player has a claim of ownership to virtual property in a
computer game. And second, the software company was partially liable for
damages because of bugs in their code. The case was in China, which
isn't much of a precedent for the rest of the world, but it is still
interesting news.
<http://story.news.yahoo.com/news?tmpl=story&u=/nm/20031219/wr_nm/entertainment_china_hacker_dc_1> or <http://tinyurl.com/3xsfq>

An interesting blackmail story. "Cyber blackmail artists are shaking
down office workers, threatening to delete computer files or install
pornographic images on their work PCs unless they pay a ransom, police
and security experts said."
<http://www.cnn.com/2003/TECH/internet/12/29/cyber.blackmail.reut/index.html> or <http://tinyurl.com/3924u>

An article on the future of computer security. The moral is identical
to what I've been saying: things will get better eventually, but before
that things will get worse:
<http://www.computerworld.com/newsletter/0,4902,88646,00.html>

A story about security in the National Football League:
<http://www.csoonline.com/read/010104/nfl.html>

How to hack password-protected MS-Word documents. Not only can you
view protected documents, you can also make changes to them and
reprotect them. This is a huge security vulnerability.
<http://www.securityfocus.com/archive/1/348692/2004-01-02/2004-01-08/0>

Last month Bush snuck into law one of the provisions of the failed
PATRIOT ACT 2. The FBI can now obtain records from financial
institutions without requiring permission from a judge. The
institution can't tell the target person that his records were taken by
the FBI. And the term "financial institution" has been expanded to
include insurance companies, travel agencies, real estate agents,
stockbrokers, the U.S. Postal Service, jewelry stores, casinos, and car
dealerships.
<http://www.wired.com/news/privacy/0,1848,61792,00.html>

Adobe has special code in its products to prevent counterfeiting. I
think this is a great security countermeasure. It's not designed to
defend against the professional counterfeiters, with their counterfeit
plates and special paper. It's designed to defend against the amateur
counterfeiter, the hobbyist. Color copiers have had
anti-counterfeiting defenses for years. Raising the bar is a great
defense here.
<http://www.miami.com/mld/miamiherald/news/breaking_news/7674024.htm>


** *** ***** ******* *********** *************

           Terrorists and Almanacs



It's so bizarre it's almost self-parody. The FBI issued a warning to
police across the nation to be on the watch for people carrying
almanacs, because terrorists may use these reference books "to assist
with target selection and pre-operational planning."

Gadzooks! People with information are not suspect. Almanacs,
reference books, and informational websites are not dangerous tools
that aid terrorism. They're useful tools for all of us, and people who
use them are better people because of them. I worry about alerts like
these, because they reinforce the myth that information is inherently
dangerous.

The FBI's bulletin:
<http://cryptome.org/fbi-almanacs.htm>

News article:
<http://www.sfgate.com/cgi-bin/article.cgi?file=/news/archive/2003/12/29/national1426EST0580.DTL> or <http://tinyurl.com/29lxw>

Clever commentary:
<http://nielsenhayden.com/makinglight/archives/004361.html#004361>


** *** ***** ******* *********** *************

                Counterpane News



Counterpane continues to offer its Enterprise Protection Suite, which
combines Managed Security Monitoring with Managed Vulnerability
Scanning, fully outsourced Device Management, and Security Consulting
services, at a 15% discount to Crypto-Gram readers (and, by extension,
everyone):
<http://www.counterpane.com/cgi-bin/enterprise.cgi>

EMEA press release:
<http://www.counterpane.com/pr-20031217.html>

Schneier was chosen as Best Industry Spokesman by Info Security Magazine:
<http://infosecuritymag.techtarget.com/ss/0,295796,sid6_iss288_art514,00.html> or <http://tinyurl.com/39wcg>

Q&A with Schneier in Infoworld:
<http://www.infoworld.com/article/03/12/12/49FEiw25luminaries_3.html>

Schneier essay on Blaster and the blackout (Salon.com):
<http://www.salon.com/tech/feature/2003/12/16/blaster_security/index_np.html> or <http://tinyurl.com/26gpn>

Schneier op-ed essay on semantic attacks (San Jose Mercury News):
<http://www.bayarea.com/mld/mercurynews/7529172.htm>

Schneier's op-ed essay on casual surveillance and the loss of personal
privacy (Minneapolis Star-Tribune):
<http://www.startribune.com/stories/1519/4278339.html>


** *** ***** ******* *********** *************

           More "Beyond Fear" Reviews


"Beyond Fear" continues to sell well. The book is going into its
second printing, so if it's not at your local bookstore, be patient for
a couple of weeks.

A new review:
<http://www.vnunet.com/Analysis/1151575>

Two different reviews from Computing Reviews:
<http://www.reviews.com/navigation.cfm?targetpage=Review&media_id=1542987&review_id=128744> or <http://tinyurl.com/2l377>
<http://www.reviews.com/navigation.cfm?targetpage=Review&media_id=1542987&review_id=128676> or <http://tinyurl.com/276be>

Book's website:
<http://www.schneier.com/bf.html>


** *** ***** ******* *********** *************

        Security Notes from All Over:
    President Musharraf and Signal Jammers



Attackers wired a bridge in Pakistan with explosives, intending to
detonate it when President Musharraf's motorcade drove over it. But
according to a Pakistani security official, "The presidential motorcade
has special jamming equipment, which blocks all remote-controlled
devices in a 200-metre radius."

Unfortunately, now that this information has been published in the
paper, the jamming equipment is unlikely to save him next time.

It's rare that secrecy is good for security, but this is an example of
it. Musharraf's jamming equipment was effective precisely because the
attackers didn't expect it. Now that they know about it, they're going
to use some other detonation mechanism: wires, cell phone
communications, timers, etc.

But maybe none of this is true.

Think about it: if the jamming equipment worked, why would the
Pakistani security tell the press? There are several reasons. One,
the bad guys found out about it, either when their detonator didn't
work or through some other mechanism, so they might as well tell
everybody. Two, to make the bad guys wonder what other countermeasures
the motorcade has. Or three, because the press story is so cool that
it's worth rendering the countermeasure ineffective. None of these
explanations seems very likely.

There's another possible explanation: there's no jamming
equipment. The detonation failed for some other, unexplained, reason,
and Pakistani security forces are pretending that they can block remote
detonations.

Deception is another excellent security countermeasure, and one
that--at least to me--is a more likely explanation of events.

<http://www.salon.com/news/wire/2003/12/17/musharraf/>


** *** ***** ******* *********** *************

                     WEIS



The Third Workshop on Economics and Information Security will be held
in Minneapolis in May. This is currently my favorite security
conference. I think that economics has a lot to teach computer
security, and it is very interesting to get economists, lawyers, and
computer security experts in the same room talking about issues.

Conference website:
<http://www.dtc.umn.edu/weis2004>

Websites for the First and Second Workshops, including many of the
papers presented:
<http://www.sims.berkeley.edu/resources/affiliates/workshops/econsecurity/> or <http://tinyurl.com/2t2gn>
<http://www.cpppe.umd.edu/rhsmith3/index.html>


** *** ***** ******* *********** *************

            New Credit Card Scam



This one is clever.

You receive a telephone call from someone purporting to be from your
credit card company. They claim to be from something like the security
and fraud department, and question you about a fake purchase for some
amount close to $500.

When you say that the purchase wasn't yours, they tell you that they're
tracking the fraudsters and that you will receive a credit. They tell
you that the fraudsters are making fake purchases on cards for amounts
just under $500, and that they're on the case.

They know your account number. They know your name and address. They
continue to spin the story, and eventually get you to reveal the three
extra numbers on the back of your card.

That's all they need. They then start charging your card for amounts
just under $500. When you get your bill, you're unlikely to call the
credit card company because you already know that they're on the case
and that you'll receive a credit.

It's a really clever social engineering attack. They have to hit a lot
of cards fast and then disappear, because otherwise they can be
tracked, but I bet they've made a lot of money so far.


** *** ***** ******* *********** *************

   Diverting Aircraft and National Intelligence



Security can fail in two different ways. It can fail to work in the
presence of an attack: a burglar alarm that a burglar successfully
defeats. But security can also fail to work correctly when there's no
attack: a burglar alarm that goes off even if no one is there.

Citing "very credible" intelligence regarding terrorism threats, U.S.
intelligence canceled 15 international flights in the last couple of
weeks, diverted at least one more flight to Canada, and had F-16s
shadow others as they approached their final destinations.

These seem to have been a bunch of false alarms. Sometimes it was a
case of mistaken identity. For example, one of the "terrorists" on an
Air France flight was a child whose name matched that of a terrorist
leader; another was a Welsh insurance agent. Sometimes it was a case
of assuming too much; British Airways Flight 223 was detained once and
canceled twice, on three consecutive days, presumably because that
flight number turned up on some communications intercept somewhere. In
response to the public embarrassment from these false alarms, the
government is slowly leaking information about a particular person who
didn't show up for his flight, and two non-Arab-looking men who may or
may not have had bombs. But these seem more like efforts to save face
than the very credible evidence that the government promised.

Security involves a trade-off: a balance of the costs and
benefits. It's clear that canceling all flights, now and forever,
would eliminate the threat from air travel. But no one would ever
suggest that, because the trade-off is just too onerous. Canceling a
few flights here and there seems like a good trade-off because the
results of missing a real threat are so severe. But repeatedly
sounding false alarms entails security problems, too. False alarms are
expensive -- in money, time, and the privacy of the passengers affected
-- and they demonstrate that the "credible threats" aren't credible at
all. Like the boy who cried wolf, everyone from airport security
officials to foreign governments will stop taking these warnings
seriously. We're relying on our allies to secure international
flights; demonstrating that we can't tell terrorists from children
isn't the way to inspire confidence.

Intelligence is a difficult problem. You start with a mass of raw
data: people in flight schools, secret meetings in foreign countries,
tips from foreign governments, immigration records, apartment rental
agreements, phone logs and credit card statements. Understanding these
data, drawing the right conclusions -- that's intelligence. It's easy
in hindsight but very difficult before the fact, since most data is
irrelevant and most leads are false. The crucial bits of data are just
random clues among thousands of other random clues, almost all of which
turn out to be false or misleading or irrelevant.

In the months and years after 9/11, the U.S. government has tried to
address the problem by demanding (and largely receiving) more
data. Over the New Year's weekend, for example, federal agents
collected the names of 260,000 people staying in Las Vegas
hotels. This broad vacuuming of data is expensive, and completely
misses the point. The problem isn't obtaining data, it's deciding
which data is worth analyzing and then interpreting it. So much data
is collected that intelligence organizations can't possibly analyze it
all. Deciding what to look at can be an impossible task, so
substantial amounts of good intelligence go unread and unanalyzed. Data
collection is easy; analysis is difficult.

Many think the analysis problem can be solved by throwing more
computers at it, but that's not the case. Computers are dumb. They
can find obvious patterns, but they won't be able to find the next
terrorist attack. Al-Qaida is smart, and excels in doing the
unexpected. Osama bin Laden and his troops are going to make mistakes,
but to a computer, their "suspicious" behavior isn't going to be any
different than the suspicious behavior of millions of honest
people. Finding the real plot among all the false leads requires human
intelligence.

More raw data can even be counterproductive. With more data, you have
the same number of "needles" and a much larger "haystack" to find them
in. In the 1980s and before, East German police collected an enormous
amount of data on 4 million East Germans, roughly a quarter of their
population. Yet even they did not foresee the peaceful overthrow of
the Communist government; they invested too heavily in data collection
while neglecting data interpretation.
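
The needles-and-haystack problem can be made quantitative. A
back-of-the-envelope sketch in Python, using illustrative numbers
rather than real figures: even a remarkably accurate screen, applied
to a population in which real plots are vanishingly rare, produces
almost nothing but false alarms:

  # Illustrative numbers only; the real rates are unknown.
  population   = 10_000_000  # records examined
  true_threats = 10          # actual needles in the haystack
  sensitivity  = 0.99        # P(flagged | real threat)
  false_rate   = 0.001       # P(flagged | innocent) -- optimistically low

  flags_true  = true_threats * sensitivity
  flags_false = (population - true_threats) * false_rate

  precision = flags_true / (flags_true + flags_false)
  print(f"{flags_true + flags_false:,.0f} alarms, {precision:.2%} real")
  # ~10,010 alarms, of which roughly 0.1% point at real threats.
  # Doubling the haystack doubles the false alarms; the needles stay fixed.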

In early December, the European Union agreed to turn over detailed
passenger data to the U.S. In the few weeks that the U.S. has had this
data, we've seen 15 flight cancellations. We've seen investigative
resources chasing false alarms generated by computer, instead of
looking for real connections that may uncover the next terrorist
plot. We may have more data, but we arguably have a worse security system.

This isn't to say that intelligence is useless. It's probably the best
weapon we have in our attempts to thwart global terrorism, but it's a
weapon we need to learn to wield properly. The 9/11 terrorists left a
huge trail of clues as they planned their attack, and so, presumably,
are the terrorist plotters of today. Our failure to prevent 9/11 was a
failure of analysis, a human failure. And if we fail to prevent the
next terrorist attack, it will also be a human failure.

Relying on computers to sift through enormous amounts of data, and
investigators to act on every alarm the computers sound, is a bad
security trade-off. It's going to cause an endless stream of false
alarms, cost millions of dollars, unduly scare people, trample on
individual rights and inure people to the real threats. Good
intelligence involves finding meaning among enormous reams of
irrelevant data, then organizing all those disparate pieces of
information into coherent predictions about what will happen next. It
requires smart people who can see connections, and access to
information from many different branches of government. The whole
picture can't be seen by any individual piece of the bureaucracy; it
is larger than any of them.

These airline disruptions highlight a serious problem with U.S.
intelligence. There's too much bureaucracy and not enough
coordination. There's too much reliance on computers and
automation. There's plenty of raw material, but not enough
thoughtfulness. These problems are not new; they're historically
what's been wrong with U.S. intelligence. These airline disruptions
make us look like a bunch of incompetents who cry wolf at the slightest
provocation.


This essay originally appeared in Salon.
<http://www.salon.com/opinion/feature/2004/01/09/security/>

News articles:
<http://www.usnews.com/usnews/issue/040112/usnews/12aviation.htm>
<http://www.napanews.com/templates/index.cfm?template=story_full&id=17DB7A0D-A348-43AF-967D-8D2257117047> or <http://tinyurl.com/2utml>
<http://www.startribune.com/stories/484/4298735.html>
<http://www.contracostatimes.com/mld/cctimes/news/7635827.htm>
<http://www.reuters.com/newsArticle.jhtml?type=reutersEdge&storyID=4073670>
<http://www.smh.com.au/articles/2004/01/03/1072908948066.html>
<http://www.usatoday.com/news/world/2004-01-07-france-missed-flight_x.htm>


** *** ***** ******* *********** *************

               Comments from Readers



From: [log in to unmask]
Subject: Blaster and the August 14th Blackout

I just read your article, and have an additional question worth
asking.

The article's hypothesis is that the massive blackout was indirectly
aided by alarm systems that failed due to MS Blast, and these failed
alarm systems allowed other equipment failures and adverse conditions
to go undetected by the power operators. Because the technicians
didn't know about the adverse conditions, their hands were tied and
massive cascading failure resulted.

My question is, under normal circumstances, assuming the alarm systems
are operational, how often do equipment failures or adverse conditions
normally occur such that the alarm systems detect them in time, and
humans can intervene and head off massive cascading failures?

I suspect that if the computers were working that day, the technicians
would have learned about the alarm conditions, and they could have
headed off the catastrophe. I just want to know how often these alarm
conditions occur on a day-to-day basis.

In other words, how many problems occur that we, the general public,
don't ever hear about?

If we knew this probability metric, we could assess the relative hazard
of worms leading to widespread blackouts as a function of alarm
condition probability and alarm system/Internet interconnectedness.
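
To illustrate the kind of model the reader is proposing, here is a toy
calculation in Python with invented parameters; only the structure of
the estimate matters:

  # All parameters are invented, for illustration only.
  p_condition = 0.05  # P(alarm-worthy grid condition on a given day)
  p_worm      = 0.01  # P(worm outbreak severe enough to down alarm systems)
  p_exposed   = 0.30  # fraction of alarm systems reachable from the Internet
  p_cascade   = 0.50  # P(an undetected condition cascades into blackout)

  # A blackout of this type requires all four events to coincide.
  p_blackout_per_day = p_condition * p_worm * p_exposed * p_cascade
  print(f"expected events per year: {p_blackout_per_day * 365:.3f}")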

I don't expect anyone to come forward to corroborate your hypothesis,
as that would be tantamount to an admission of failure by the
responsible IS/security staff, and likely grounds for
dismissal. Perhaps some lone whistle blower might come out much later.



From: Andrew Odlyzko <[log in to unmask]>
Subject: Computerized and Electronic Voting

The voting booth does provide some security against bribery and
coercion, but only as long as we can stop camera phones from being used
in them!



From: Fred Heutte <[log in to unmask]>
Subject: Computerized and Electronic Voting

Thanks for your cogent thoughts on ballot security. I almost
completely agree and was one of the first signers of David Dill's
petition. I am also involved professionally in voter data -- from the
campaign side, with voter files, not directly with voting equipment --
but we're close enough to the vote counting process to see how it
actually works.

I would only disagree slightly in one area. Absentee voting is quite
secure when looking at the overall approach and assessing the risks in
every part of the process. As long as reasonable precautions like
signature checking are done, it would be difficult and expensive to
change the results of mail voting significantly.

For example, in Oregon, ballots are returned in an inside security
envelope which is sealed by the voter. The outside envelope has a
signature area on the back side. This is compared to the voter's
signature on file at the elections office. The larger counties
actually do a digitized comparison, and back that up with a manual
comparison with a stratified random sample (to validate machine results
on an ongoing basis), as well as a final determination for any
questionable matches.
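
That validation loop -- machine matching backed by manual review of a
stratified random sample -- is a standard audit pattern. A minimal
sketch in Python, with hypothetical data and a stand-in for the human
reviewer:

  import random
  from collections import defaultdict

  # Hypothetical ballots: (ballot_id, county, machine_accepted_signature).
  ballots = [(i, random.choice("ABC"), random.random() > 0.02)
             for i in range(10_000)]

  def manual_review(ballot_id):
      # Stand-in for a human comparing the envelope signature to the file.
      return True

  # Stratify machine-accepted ballots by county, then sample each stratum,
  # so small counties get checked as rigorously as large ones.
  strata = defaultdict(list)
  for bid, county, accepted in ballots:
      if accepted:
          strata[county].append(bid)

  for county, ids in sorted(strata.items()):
      sample = random.sample(ids, k=min(100, len(ids)))
      errors = sum(1 for bid in sample if not manual_review(bid))
      print(f"county {county}: {errors}/{len(sample)} disagreements")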

Certainly it is possible to forge a signature. However, this
authentication process would greatly raise the cost of forged mail
ballots, absent consent of the voter. In turn, interference with or
coercion of absentee voters would require much higher travel costs
(at least) than doing so at a polling place, for a given change in the
outcome.

It is true that precincts have poll watchers, and absentee voters do
not. But consider this. Ballot boxes, which are often delivered by
temporary poll workers from the precinct to the elections office, are
occasionally stolen, but mail ballots are handled within a vast stream
of other mail by employees with paychecks and pensions at stake. The
relatively low level of mail fraud inside the postal system is a
testament to its relative security, and the points where ballots are
aggregated for delivery to the elections office are usually on public
property and can also be watched by outside observers if need be.

Oregon has had some elections with 100% "vote by mail" since 1996, and
all elections since 1999. So far, no verifiable evidence of voter
fraud has emerged, despite many checks and some predictions by those
with a political axe to grind that we would be engulfed in a wave of
election fixing.

The reality is that Oregon's system, which is based on some
common-sense security principles, has proven to be robust. The one
lingering problem has been the need of some counties to make their
voters use punch cards at home because of their antiquated vote
counting equipment. But while this is a vote integrity issue -- since
state statistics show a much higher undervote and spoiled ballot total
for punch cards as compared to mark-sense ballots -- it is not a
security issue per se. And with Help America Vote Act (HAVA) funding
to convert to more modern vote counting systems, the Oregon chad
remains in only one county and will go extinct after 2004.

The mark-sense ("fill in the ovals") ballots we have work well, and
have low rates of over-votes and under-votes, despite the lack of
automated machine checking that is possible in well-designed precinct
voting systems. This suggests that reasonable visual design and
human-friendly paper and pencil/pen home voting is a very reliable and
secure system. When aided by automated counting equipment, we even
have the additional benefit of very fast initial counts.

The increase in voter participation in Oregon since the advent of
vote-by-mail -- 10 to 30 percentage points above national averages,
depending on the kind of election -- leads to the only other issue,
which is slow machine counts on election night after the polls close
due to the surge of late ballots received at drop-off locations around
the state. Oregon in fact isn't really "vote by mail," it's
vote-at-home, with a paper ballot that can be mailed or left at any
official drop-off point in the state, including county election
offices, many schools and libraries, malls, town squares, etc.

The great advantage of the Oregon system is that it relies on the
principle that if you appeal to the best instincts of the citizen, the
overwhelming majority will "do our part" to ensure the integrity of the
democratic voting process, whether it is full consideration of the
candidates and issues before voting, watching to make sure all ballots
are securely transferred and counted, or favoring those laws and
policies that ensure that everyone eligible can vote, that their votes
are counted, and that the candidates and measures with the most votes win.

The system is also cheaper than running traditional precinct
elections. What's not to like?



From: Paul Rubin <[log in to unmask]>
Subject: gambling machines vs. voting machines

The document at <http://gaming.state.nv.us/forms/frm141.pdf> shows what
anyone designing a new gambling machine (e.g. video poker machine) has
to do to get it certified in Nevada. Note that, per page 4, all source code
for the game-specific parts of the machine must be submitted to the
gaming commission along with enough framework for the commission to
test it, and I'm told they actually examine it line by line (approval
takes about six months). There are also specifications for the
physical security of the machines.

After deployment, the audit department apparently does random spot
checks, going into casinos and pulling out machines, making sure that
the EPROM images actually running in them are the same as the images
that were approved. Four or five other states apparently do similar
examinations to certify equipment. The rest of the states then go
along with what the main five or six gambling states decide.
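
The spot check described above reduces to comparing cryptographic
fingerprints of firmware images. A minimal sketch in Python (using
SHA-256 for illustration; the commission's actual procedure will
differ):

  import hashlib

  def image_digest(image: bytes) -> str:
      # Fingerprint of a firmware image as dumped from the EPROM.
      return hashlib.sha256(image).hexdigest()

  # Hypothetical approved build, whose digest is on file.
  approved_image = bytes(range(256)) * 64
  approved_digest = image_digest(approved_image)

  def audit(dumped_image: bytes) -> bool:
      # True only if the deployed machine runs exactly the certified code.
      return image_digest(dumped_image) == approved_digest

  assert audit(approved_image)                # untampered machine passes
  assert not audit(approved_image + b"\x00")  # any modification fails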

It's bizarre that voting machine vendors squawk so much about getting
their code audited, since they face the same issues as gambling machine
vendors do (the purpose of the code review must be partly to make sure
the machine isn't sneakily grabbing a few extra points of revenue), and
the gambling machine vendors seem to tolerate the requirement.

There are also some federal standards about code certification for
firmware running inside medical implants or in avionics. I'm trying to
find out more about that. Voting machine code seems to have no
standards at all.



From: Arno Schäfer <[log in to unmask]>
Subject: Modem hack

 > This is an old scam. A man uses a computer virus to change
 > Internet dial-up numbers of victims' computers to special
 > premium rates, in an attempt to make a pile of money. How he
 > thought that he wouldn't get caught is beyond me.

That remark is interesting. In Germany, these so-called "dialer"
programs are an enormous problem, so much so that the German parliament
recently passed a special law in order to contain the deluge of these
scams. Today, running a "dialer protection" tool is as essential to
German Internet users as having virus protection and a personal
firewall in place. Apparently, the danger of getting caught and
prosecuted is small compared to the financial incentive for these
people. One of the reasons for this is that it is often virtually
impossible to find out who is behind one of these "premium rate"
dial-up numbers. There is a whole industry of sellers and resellers
for German premium rate numbers, many of which are in other countries,
far from German jurisdiction. The fees for these calls (the most
blatant of which go up to $100 US per minute or up to $1000 US per
single call!) were collected with the regular phone bill. When someone
discovered they had accidentally "contracted" a dialer program, it
often was impossible to track down the culprits, or it was already too
late and they had disappeared. Moreover, you had to prove that you had
not deliberately activated the dialer program, as these were usually
declared as a "service" (e.g., in order to access adult content). So
in fact having someone actually prosecuted for this kind of fraud was
the exception rather than the rule. Luckily, the legal position for
victims of these scams has markedly improved by now in Germany.
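
For readers unfamiliar with these tools: a "dialer protection" program,
at its simplest, watches the configured dial-up number and refuses
connections to anything unexpected. A conceptual sketch in Python with
hypothetical paths and numbers (real tools hooked the Windows dialer
itself):

  # Hypothetical settings file and number list, for illustration only.
  TRUSTED_NUMBERS = {"0191234567"}        # ISP numbers the user expects
  SETTINGS_FILE = "dialup_settings.txt"   # stand-in for the real phonebook

  def dialup_number_is_safe() -> bool:
      with open(SETTINGS_FILE) as f:
          configured = f.read().strip()
      if configured not in TRUSTED_NUMBERS:
          print(f"WARNING: dial-up number changed to {configured!r};"
                " possible premium-rate dialer -- refusing to connect.")
          return False
      return True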



From: John Viega <[log in to unmask]>
Subject: Amit Yoran

I was surprised in reading this month's Crypto-Gram to see you place
Amit Yoran in the doghouse for the following quote:

"For example, should we hold software vendors accountable for the
security of their code or for flaws in their code? In concept, that may
make sense. But in practice, do they have the capability, the tools to
produce more secure code?"

The only problem I personally see with this quote is that it doesn't
have enough context attached to it to make an absolute determination of
the intent. I do see how you could interpret it to mean, "It's
impossible to produce more secure code than we do today." However, just
from reading the quote, it seems he's saying, rather, that forcing
companies to accept liability isn't going to solve the problem, because
even with incentive to have perfectly secure software, companies will
be unable to deliver, due to the complexities of software development
and the lack of good tools and methodologies.

If that is the intent of Mr. Yoran's statement (which I'm sure it is,
as I'll get to in a moment), then he is dead-on. While there are
clearly easy things people can do that will help with the problem
(e.g., use any language other than C), the goal of building a system
without risk is more or less unachievable in practice. And the security
industry has done little to make it easy on developers, for whom
security can never be more than a part-time concern.

Not only have we, as an industry, not provided adequate tools to
support designing, building, and maintaining more secure systems, but
the "out of the box" security solutions we provide lend themselves to
misuse as well. For example, while Java is sometimes billed as a
"secure" language, I can tell you that we still tend to find one
significant security risk for every thousand lines of code (or so) in
Internet-enabled Java programs. Perhaps a better example is SSL/TLS,
where the libraries we provide developers encourage misuse. The mental
model developers need to use these libraries in a way that protects
against simple man-in-the-middle attacks is far more complex than the
model they tend to have (i.e., that SSL/TLS is a drop-in solution for
securing network traffic). As a result, the vast majority of
applications that deploy SSL/TLS get it wrong the first time, in a
major way.
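
The distinction is worth spelling out. A minimal sketch using
present-day Python's ssl module: the "drop-in" mental model encrypts
the channel but verifies nothing, so any man in the middle presenting
his own certificate is accepted:

  import socket
  import ssl

  host = "example.com"  # hypothetical server

  # The misuse described above: encryption with no authentication.
  # (Shown for contrast; not used in the connection below.)
  naive = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
  naive.check_hostname = False
  naive.verify_mode = ssl.CERT_NONE  # accepts any certificate at all

  # Correct use: validate the certificate chain AND that the
  # certificate actually names the host we meant to reach.
  strict = ssl.create_default_context()

  with socket.create_connection((host, 443)) as sock:
      with strict.wrap_socket(sock, server_hostname=host) as tls:
          print(tls.getpeercert()["subject"])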

Yes, there are software security mistakes that nobody should ever be
making, particularly the buffer overflow and its ilk. But, I'm sure
that you of all people should know how many things can go wrong in
networked applications (particularly when there are complex protocols
involved) and how obscure some of the faults in software systems can
be. For example, there was a recent problem in your own Password Safe
that showed up despite a defensive design.

Moreover, I'm sure you're aware that design and analysis techniques for
software security are still in their infancy. I'm currently working on
putting together a consortium to develop better design methodologies
that better integrate with existing software engineering practices,
because there is nothing effective out there yet. And, while static
analysis technologies such as model checking are decades old, we've
only been applying them to security problems for a few years now. And
such technologies are still far from the point where they'll be
reasonably complete and integrate adequately with the workflow of
finicky developers.

Even in a world with great design and analysis technologies, we're
going to have a hard time educating developers on the world of risk
around them to the point that social engineering attacks become totally
impractical. It's not unreasonable to say that we're far away from the
point where it would make financial sense to make software vendors
liable for security mistakes.

I do know Amit Yoran personally, and I know him fairly well. He is
extremely intelligent and understands the software security problem and
the limits of current technology. He understands this problem so well
that, before he accepted the job of Cybersecurity Tzar, he took a very
active interest in the affairs of our startup and our analysis
technology. As a result, I can say quite definitively that Amit not
only understands the software security problem far better than most
people do, he believes that it is important for the security industry
to pioneer a trail toward liability by providing better technologies
and methodologies.

I do see how you could have misinterpreted Amit's stance from the
ambiguity in that one quote. I am surprised, though, that you would
come to such a snap judgment based on it alone. Beyond the fact that
you've undoubtedly been misquoted to your detriment on at least one
occasion, a bit of diligence on Amit certainly would have turned up the
fact that he's actually quite clued in on this subject, and is not in
the same class as the typical snake-oil you expose on a monthly basis.



From: Mary Ann Davidson <[log in to unmask]>
Subject: Amit Yoran

I am responding to a comment you made in the latest Cryptogram about a
quote Amit Yoran made (I have not read the original interview, so
please bear with me):

"'For example, should we hold software vendors accountable for the
security of their code or for flaws in their code?' Mr. Yoran asked in
an interview. 'In concept, that may make sense. But in practice, do
they have the capability, the tools to produce more secure code?'"

"The sheer idiocy of this quote amazes me. Does he really think that
writing more secure code is too hard for companies to manage? Does he
really think that companies are doing absolutely the best they possibly
can?"

I don't necessarily read that as indicating that it is too hard for
companies (with some caveats I will explain below) to write better
code. Referring to his comment about "the tools to produce more secure
code," I think it is true that the tools are lacking to help make it
easier to find nasty bugs in development.

This does not -- she said repeatedly -- excuse the overabundance of
avoidable, preventable security faults, but the lack of good code
scanning and QA tools does make it harder to do better, even if you
want to do better. I have seen development groups that "get" security,
have internalized it, who are all proud of themselves for checking
input conditions to prevent buffer overflows, but they only checked 20
out of 21 input conditions. One mistake still leads to a buffer
overflow, and is still really embarrassing and expensive to fix. If you
can automate more of these checks, it obviously will lead to better code.
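
The 20-out-of-21 failure mode is exactly what mechanical checking
catches. As the crudest possible example of such a tool, here is a
short Python sketch that flags classic unbounded-copy calls in C
source (illustrative only; it is exactly the kind of false-positive-
prone pattern matching criticized below):

  import re
  import sys

  # Flag classic unbounded-copy calls; a real tool parses the code.
  UNSAFE = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

  def scan(path):
      hits = 0
      with open(path, errors="replace") as f:
          for lineno, line in enumerate(f, start=1):
              if UNSAFE.search(line):
                  print(f"{path}:{lineno}: unbounded copy: {line.strip()}")
                  hits += 1
      return hits

  if __name__ == "__main__":
      total = sum(scan(p) for p in sys.argv[1:])
      sys.exit(1 if total else 0)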

Most of the code scanning/vulnerability assessment tools I see are
designed by consulting firms, which means that they are generally not
designed to run against a huge code base every night, they don't work
on source code, they are not designed to be extensible, they have too
many false positives, and so on. Venture capitalists as a group are
often more interested in funding "Band-Aid" companies with outrageous
claims ("Our security product slices, dices, protects against all known
julienne fry attacks, and makes your teeth whiter, too!") than vaccine
companies ("scan code, find bugs, fix bugs before product ships so you
don't need Band-Aids, or need fewer of them"). You can make more money
on Band-Aids than on vaccines, which is probably one reason there are
so many snake-oil security products out there instead of a few really
good code scanning tools. Defense in depth is necessary, but we would
not need so much of it if we all made better products.

Clearly, corporate will to do a better job is a prerequisite, or nobody
would buy code-scanning tools, much less take the time to use them in
development. Most of the security issues in industry come down to
"crummy code," and writing less crummy code is a matter of culture and
tools to do the job. What amazes me is that almost every discussion
about this issue is prefaced with "...but we all know we can't build
perfect code." That does not mean we should stop trying, or that the
status quo is acceptable.

To answer your question (Does he really think that companies are doing
absolutely the best they possibly can?), I've met Amit and talked to
him a couple of times. No, he is not letting industry off the hook and
no, I don't believe he thinks industry is doing everything they can.
I've never read his comments that way, at any rate.


** *** ***** ******* *********** *************


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on security: computer and otherwise. Back
issues are available on <http://www.schneier.com/crypto-gram.html>.

To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send
a blank message to [log in to unmask] To
unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.

Comments on CRYPTO-GRAM should be sent to
[log in to unmask] Permission to print comments is assumed
unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who
will find it valuable. Permission is granted to reprint CRYPTO-GRAM,
as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of
the best sellers "Beyond Fear," "Secrets and Lies," and "Applied
Cryptography," and an inventor of the Blowfish and Twofish
algorithms. He is founder and CTO of Counterpane Internet Security
Inc., and is a member of the Advisory Board of the Electronic Privacy
Information Center (EPIC). He is a frequent writer and lecturer on
security topics. See <http://www.schneier.com>.

Counterpane Internet Security, Inc. is the world leader in Managed
Security Monitoring. Counterpane's expert security analysts protect
networks for Fortune 1000 companies world-wide. See
<http://www.counterpane.com>.

Copyright (c) 2004 by Bruce Schneier.

************************************************************************************
Distributed through Cyber-Society-Live [CSL]: CSL is a moderated discussion
list made up of people who are interested in the interdisciplinary academic
study of Cyber Society in all its manifestations. To join the list please visit:
http://www.jiscmail.ac.uk/lists/cyber-society-live.html
*************************************************************************************
