
Subject: [CSL]: CRYPTO-GRAM, December 15, 2005
From: J Armitage <[log in to unmask]>
Reply-To: Interdisciplinary academic study of Cyber Society <[log in to unmask]>
Date: Fri, 16 Dec 2005 07:31:15 -0000
Content-Type: text/plain
Parts/Attachments: text/plain (1948 lines)

-----Original Message-----
From: Bruce Schneier
To: [log in to unmask]
Sent: 15/12/2005 21:24
Subject: CRYPTO-GRAM, December 15, 2005

                  CRYPTO-GRAM

               December 15, 2005

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            [log in to unmask]
            <http://www.schneier.com>
           <http://www.counterpane.com>


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit 
<http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at 
<http://www.schneier.com/crypto-gram-0512.html>.  These same essays 
appear in the "Schneier on Security" blog: 
<http://www.schneier.com/blog>.  An RSS feed is available.


** *** ***** ******* *********** *************

In this issue:
      Airplane Security
      Australian Minister's Sensible Comments on Airline Security Spark 
Outcry
      Sky Marshal Shooting in Miami
      New Airplane Security Regulations
      Crypto-Gram Reprints
      Sony's DRM Rootkit: The Real Story
      CME in Practice
      OpenDocument Format and the State of Massachusetts
      News
      Surveillance and Oversight
      Truckers Watching the Highways
      Snake-Oil Research in the Magazine "Nature"
      Counterpane News
      Twofish Cryptanalysis Rumors
      Totally Secure Classical Communications?
      Comments from Readers


** *** ***** ******* *********** *************

      Airplane Security



Since 9/11, our nation has been obsessed with air-travel security. 
Terrorist attacks from the air have been the threat that looms largest 
in Americans' minds. As a result, we've wasted millions on misguided 
programs to separate the regular travelers from the suspected 
terrorists -- money that could have been spent to actually make us
safer.

Consider CAPPS and its replacement, Secure Flight. These are programs 
to check travelers against the 30,000 to 40,000 names on the 
government's No-Fly list, and another 30,000 to 40,000 on its Selectee 
list.

They're bizarre lists: people -- names and aliases -- who are too 
dangerous to be allowed to fly under any circumstance, yet so innocent 
that they cannot be arrested, even under the draconian provisions of 
the Patriot Act. The Selectee list contains an equal number of 
travelers who must be searched extensively before they're allowed to 
fly. Who are these people, anyway?

The truth is, nobody knows. The lists come from the Terrorist Screening 
Database, a hodgepodge compiled in haste from a variety of sources, 
with no clear rules about who should be on it or how to get off it. The 
government is trying to clean up the lists, but -- garbage in, garbage 
out -- it's not having much success.

The program has been a complete failure, resulting in exactly zero 
terrorists caught. And even worse, thousands (or more) have been denied 
the ability to fly, even though they've done nothing wrong. These 
denials fall into two categories: the "Ted Kennedy" problem (people who 
aren't on the list but share a name with someone who is) and the "Cat 
Stevens" problem (people on the list who shouldn't be). Even now, four 
years after 9/11, both these problems remain.
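
To see why the "Ted Kennedy" problem is structural rather than a 
data-entry glitch, consider a toy sketch (in Python, with invented list 
entries) of what matching passengers against a list of names and 
aliases amounts to:

  # Exact-match screening flags every innocent passenger who happens
  # to share a listed name, while a suspect flying under a slightly
  # different spelling or alias slips through.
  no_fly = {"t kennedy", "j smith"}   # invented entries

  def flagged(passenger_name: str) -> bool:
      return passenger_name.strip().lower() in no_fly

  print(flagged("T Kennedy"))   # True: a senator, not a terrorist
  print(flagged("T. Kenedy"))   # False: one edit away, unflagged

Fuzzy matching only trades one failure mode for the other: loosen the 
match and the false positives multiply; tighten it and aliases walk 
through.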

I know quite a lot about this. I was a member of the government's 
Secure Flight Working Group on Privacy and Security. We looked at the 
TSA's program for matching airplane passengers with the terrorist watch 
list, and found a complete mess: poorly defined goals, incoherent 
design criteria, no clear system architecture, inadequate testing. (Our 
report was on the TSA website, but has recently been removed -- 
"refreshed" is the word the organization used -- and replaced with an 
"executive summary" that contains none of the report's findings. The 
TSA did retain two rebuttals, which read like products of the same 
outline and dismiss our findings by saying that we didn't have access 
to the requisite information.) Our conclusions match those in two 
reports by the Government Accountability Office and one by the DHS 
inspector general.

Alongside Secure Flight, the TSA is testing Registered Traveler 
programs. There are two: one administered by the TSA, and the other a 
commercial program from Verified Identity Pass called Clear. The basic 
idea is that you submit your information in advance, and if you're OK 
-- whatever that means -- you get a card that lets you go through 
security faster.

Superficially, it all seems to make sense. Why waste precious time 
making Grandma Miriam from Brooklyn empty her purse when you can search 
Sharaf, a 26-year-old who arrived last month from Egypt and is 
traveling without luggage?

The reason is security. These programs are based on the dangerous myth 
that terrorists match a particular profile and that we can somehow pick 
terrorists out of a crowd if we only can identify everyone. That's 
simply not true.

What these programs do is create two different access paths into the 
airport: high-security and low-security. The intent is to let only good 
guys take the low-security path and to force bad guys to take the 
high-security path, but it rarely works out that way. You have to 
assume that the bad guys will find a way to exploit the low-security 
path. Why couldn't a terrorist just slip an altimeter-triggered 
explosive into the baggage of a registered traveler?

It may be counterintuitive, but we are all safer if enhanced screening 
is truly random, and not based on an error-filled database or a cursory 
background check.
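
"Truly random" is doing real work in that sentence: if selection comes 
from a cryptographic random source and ignores every passenger 
attribute, there is no profile to game and no low-security path to buy 
into. A minimal sketch (the 5% rate is an arbitrary placeholder):

  import secrets

  _rng = secrets.SystemRandom()   # unpredictable, unlike a seeded PRNG

  def select_for_secondary(passenger_ids, rate=0.05):
      # Each passenger is chosen independently with probability
      # `rate`; nothing about the passenger enters the decision.
      return [p for p in passenger_ids if _rng.random() < rate]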

The truth is, Registered Traveler programs are not about security; 
they're about convenience. The Clear program is a business: Those who 
can afford $80 per year can avoid long lines. It's also a program with 
a questionable revenue model. I fly 200,000 miles a year, which makes 
me a perfect candidate for this program. But my frequent-flier status 
already lets me use the airport's fast line and means that I never get 
selected for secondary screening, so I have no incentive to pay for a 
card. Maybe that's why the Clear pilot program in Orlando, Florida, 
only signed up 10,000 of that airport's 31 million annual passengers.

I think Verified Identity Pass understands this, and is encouraging use 
of its card everywhere: at sports arenas, power plants, even office 
buildings. This is just the sort of mission creep that moves us ever 
closer to a "show me your papers" society.

Exactly two things have made airline travel safer since 9/11: 
reinforcement of cockpit doors, and passengers who now know that they 
may have to fight back. Everything else -- Secure Flight and Trusted 
Traveler included -- is security theater.  We would all be a lot safer 
if, instead, we implemented enhanced baggage security -- both ensuring 
that a passenger's bags don't fly unless he does, and explosives 
screening for all baggage -- as well as background checks and increased 
screening for airport employees.
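
Positive passenger-bag matching, the first of those baggage measures, 
is conceptually just a set comparison at the gate. A minimal sketch, 
with invented tag and passenger identifiers:

  def bags_to_offload(loaded_bags: dict, boarded: set) -> list:
      """loaded_bags maps bag tag -> owning passenger id."""
      return [tag for tag, owner in loaded_bags.items()
              if owner not in boarded]

  # Any bag whose owner failed to board comes off before departure.
  print(bags_to_offload({"B1": "p1", "B2": "p2"}, boarded={"p1"}))  # ['B2']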

Then we could take all the money we save and apply it to intelligence, 
investigation and emergency response. These are security measures that 
pay dividends regardless of what the terrorists are planning next, 
whether it's the movie plot threat of the moment, or something entirely 
different.

This essay originally appeared in Wired:
<http://www.wired.com/news/privacy/0,1848,69712,00.html>
There are a lot of links in this essay.  You can see them on Wired's 
page. Or here:
<http://www.schneier.com/essay-096.html>


** *** ***** ******* *********** *************

      Australian Minister's Sensible Comments on Airline Security Spark 
Outcry



Three weeks ago, Immigration Minister Amanda Vanstone caused a stir by 
ridiculing airplane security in a public speech. She derided much of 
post-9/11 airline security, especially the use of plastic knives 
instead of metal ones, and said "a lot of what we do is to make people 
feel better as opposed to actually achieve an outcome."

As a foreigner, I know very little about Australian politics. I don't 
know anything about Senator Vanstone, her politics, her policies, or 
her party. I have no idea what she stands for. But as a security 
technologist, I agree 100% with her comments. Most airplane security is 
what I call "security theater": ineffective measures designed to make 
people feel better about flying.

I get irritated every time I get a plastic knife with my airplane meal. 
I know it doesn't make me any safer to get plastic. El Al, a company I 
know takes security seriously, serves in-flight meals with metal 
cutlery...even in economy class.

Senator Vanstone pointed to wine glasses and HB pencils as potential 
weapons. She could have gone further. Spend a few minutes on the 
problem, and you quickly realize that airplanes are awash in potential 
weapons: belts, dental floss, keys, neckties, hatpins, canes, or the 
bare hands of someone with the proper training. Snap the extension 
handle of a wheeled suitcase off in just the right way, and you've got 
a pretty effective spear. Garrotes can be made of fishing line or 
dental floss. Shatter a CD or DVD and you'll have a bunch of 
razor-sharp fragments. Break a bottle and you've got a nasty weapon. 
Even the most unimaginative terrorist could figure out how to smuggle 
an 8-inch resin combat knife onto a plane. In my book Beyond Fear, I 
even explained how to make a knife onboard with a tube of steel epoxy 
glue.

Maybe people who have watched MacGyver should never be allowed to fly.

The point is not that we can't make air travel safe; the point is that 
we're missing the point. Yes, the 9/11 terrorists used box cutters and 
small knives to hijack four airplanes, but their attack wasn't about 
the weapons. The terrorists succeeded because they exploited a flaw in the 
US response policy. Prior to 9/11, standard procedure was to cooperate 
fully with the terrorists while the plane was in the air. The goal was 
to get the plane onto the ground, where you can more easily negotiate. 
That policy, of course, fails completely when faced with suicide 
terrorists.

And more importantly, the attack was a one-time event. We haven't seen 
the end of airplane hijacking -- there was a conventional midair 
hijacking in Colombia in September -- but the aircraft-as-missile 
tactic required surprise to be successful.

This is not to say that we should give up on airplane security, either. 
A single cursory screening is worth it, but more extensive screening 
rapidly reaches the point of diminishing returns. Most criminals are 
stupid, and are caught by a basic screening system. And just as 
important, the very act of screening is both a reminder and a 
deterrent. Terrorists can't guarantee that they will be able to slip a 
weapon through screening, so they probably won't try.

But screening will never be perfect. We can't keep weapons out of 
prisons, a much more restrictive and controlled environment. How can we 
have a hope of keeping them off airplanes? The way to prevent airplane 
terrorism is not to spend additional resources keeping objects that 
could fall into the wrong hands off airplanes. The way to improve 
airplane security is to spend those resources keeping the wrong hands 
from boarding airplanes in the first place, and to make those hands 
ineffective if they do.

Exactly two things have made airline travel safer since 9/11: 
reinforcing the cockpit door, and passengers who now know that they may 
have to fight back. Everything else -- all that extra screening, those 
massive passenger profiling systems -- is security theater.

If, as Opposition leader Kim Beazley said, Senator Vanstone should be 
sacked for speaking the truth, then we're all much less secure. And if, 
as Federal Labor's homeland security spokesman Arch Bevis said, her 
comments made a mockery of the Howard government's credibility in the 
area of counter-terrorism, then maybe Howard's government doesn't have 
any credibility.

We would all be a lot safer if we took all the money we're spending on 
enhanced passenger screening and applied it to intelligence, 
investigation, and emergency response. This is how to keep the wrong 
hands off airplanes and, more importantly, how to make us secure 
regardless of what the terrorists are planning next -- even if it has 
nothing to do with airplanes.

This essay originally appeared in the Sydney Morning Herald:
<http://www.smh.com.au/news/soapbox/airplane-security-and-metal-knives/2005/11/30/1133026503111.html> or <http://tinyurl.com/dupav>

My original blog entry on the topic:
<http://www.schneier.com/blog/archives/2005/11/australian_mini.html>


** *** ***** ******* *********** *************

      Sky Marshal Shooting in Miami



I don't have a lot to say about the Miami false-alarm terrorist 
incident.  For those of you who have spent the last few days in an 
isolation chamber, sky marshals shot and killed a mentally ill man they 
believed to be a terrorist.  The shooting happened on the ground, in 
the Jetway.  The man claimed he had a bomb and wouldn't stop when 
ordered to by sky marshals.  At least, that's the story.

I've read the reports, the claims of the sky marshals and the 
counterclaims of some witnesses.  Whatever happened -- and it's 
possible that we'll never know -- it does seem that this incident isn't 
the same as the British shooting of a Brazilian man on July 22.

I do want to make two points, though.

One, any time you have an officer making split-second life and death 
decisions, you're going to have mistakes.  I hesitate to second-guess 
the sky marshals on the ground; they were in a very difficult 
position.  But the way to minimize mistakes is through training.  I 
strongly recommend that anyone interested in this sort of thing read 
Blink, by Malcolm Gladwell.

Two, I'm not convinced the sky marshals' threat model matches 
reality.  Mentally ill people are far more common than 
terrorists.  People who claim to have a bomb and don't are far more 
common than people who actually do.  The real question we should be 
asking here is: what should the appropriate response be to this 
low-probability threat?

Blink:
<http://www.amazon.com/gp/product/0316172324/qid=1134149126>

Good Salon article on the topic:
<http://www.salon.com/tech/col/smith/2005/12/09/askthepilot165/index.html> or <http://tinyurl.com/7497z>


** *** ***** ******* *********** *************

      New Airplane Security Regulations



The TSA is relaxing the rules for bringing pointy things on aircraft.

I like some of the randomness they're introducing.  I don't know if 
they will still make people take laptops out of their cases, make 
people take off their shoes, or confiscate pocket knives.  (Different 
articles have said different things about the last one.)

This is a good change, and it's long overdue.  Airplane terrorism 
hasn't been the movie-plot threat everyone worries about for a while.

The most amazing reaction to this is from Corey Caldwell, spokeswoman 
for the Association of Flight Attendants: "When weapons are allowed 
back on board an aircraft, the pilots will be able to land the plane 
safely but the aisles will be running with blood."

How's that for hyperbole?

In my book Beyond Fear and elsewhere, I've written about the notion of 
"agenda" and how it informs security trade-offs.  From the perspective 
of the flight attendants, subjecting passengers to onerous screening 
requirements is a perfectly reasonable trade-off.  They're safer -- 
albeit only slightly -- because of it, and it doesn't cost them 
anything.  The cost is an externality to them: the passengers pay 
it.  Passengers have a broader agenda: safety, but also cost, 
convenience, time, etc.  So it makes perfect sense that the flight 
attendants object to a security change that the passengers are in favor
of.

<http://www.azcentral.com/news/articles/1201terror01.html>

Movie plot threats:
<http://www.schneier.com/essay-087.html>

Caldwell quote:
<http://www.washingtonpost.com/wp-dyn/content/article/2005/11/29/AR2005112901614.html> or <http://tinyurl.com/al72d>


** *** ***** ******* *********** *************

      Crypto-Gram Reprints



Crypto-Gram is currently in its eighth year of publication.  Back 
issues cover a variety of security-related topics, and can all be found 
on <http://www.schneier.com/crypto-gram.html>.  These are a selection 
of articles that appeared in this calendar month in other years.

Behavioral Assessment Profiling:
<http://www.schneier.com/crypto-gram-0412.html#1>

Kafka and the Digital Person:
<http://www.schneier.com/crypto-gram-0412.html#8>

Safe Personal Computing:
<http://www.schneier.com/crypto-gram-0412.html#10>

Blaster and the August 14th Blackout:
<http://www.schneier.com/crypto-gram-0312.html#1>

Quantum Cryptography:
<http://www.schneier.com/crypto-gram-0312.html#6>

Computerized and Electronic Voting:
<http://www.schneier.com/crypto-gram-0312.html#9>

Counterattack:
<http://www.schneier.com/crypto-gram-0212.html#1>

Comments on the Department of Homeland Security:
<http://www.schneier.com/crypto-gram-0212.html#3>

Crime: The Internet's Next Big Thing:
<http://www.schneier.com/crypto-gram-0212.html#7>

National ID Cards:
<http://www.schneier.com/crypto-gram-0112.html#1>

Judges Punish Bad Security:
<http://www.schneier.com/crypto-gram-0112.html#2>

Computer Security and Liabilities:
<http://www.schneier.com/crypto-gram-0112.html#4>

Fun with Vulnerability Scanners:
<http://www.schneier.com/crypto-gram-0112.html#9>

Voting and Technology:
<http://www.schneier.com/crypto-gram-0012.html#1>

"Security Is Not a Product; It's a Process"
<http://www.schneier.com/crypto-gram-9912.html#1>

Echelon Technology:
<http://www.schneier.com/crypto-gram-9912.html#3>

European Digital Cellular Algorithms:
<http://www.schneier.com/crypto-gram-9912.html#10>

The Fallacy of Cracking Contests:
<http://www.schneier.com/crypto-gram-9812.html#contests>

How to Recognize Plaintext:
<http://www.schneier.com/crypto-gram-9812.html#plaintext>

** *** ***** ******* *********** *************

      Sony's DRM Rootkit: The Real Story



It's a David and Goliath story of the tech blogs defeating a 
mega-corporation.

On Oct. 31, Mark Russinovich broke the story in his blog: Sony BMG 
Music Entertainment distributed a copy-protection scheme with music CDs 
that secretly installed a rootkit on computers. This software tool is 
run without your knowledge or consent -- if it's loaded on your 
computer with a CD, a hacker can gain and maintain access to your 
system and you wouldn't know it.

The Sony code modifies Windows so you can't tell it's there, a process 
called "cloaking" in the hacker world. It acts as spyware, 
surreptitiously sending information about you to Sony. And it can't be 
removed; trying to get rid of it damages Windows.
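
How was it found at all? Russinovich's tool used a cross-view diff: 
list the same objects through the (possibly hooked) Windows API and 
again through a lower-level path, and anything that appears only in the 
low-level view is a candidate for cloaking. A toy simulation of the 
idea, relying on the documented fact that the XCP rootkit hid anything 
whose name begins with "$sys$":

  def raw_listing():
      # Stand-in for a raw scan that bypasses the hooked API.
      return {"notes.txt", "$sys$filesystem"}

  def hooked_listing():
      # Simulates the cloak: the hook filters out "$sys$" names.
      return {n for n in raw_listing() if not n.startswith("$sys$")}

  print(raw_listing() - hooked_listing())   # {'$sys$filesystem'}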

This story was picked up by other blogs (including mine), followed by 
the computer press. Finally, the mainstream media took it up.

The outcry was so great that on Nov. 11, Sony announced it was 
temporarily halting production of that copy-protection scheme. That 
still wasn't enough -- on Nov. 14 the company announced it was pulling 
copy-protected CDs from store shelves and offered to replace customers' 
infected CDs for free.

But that's not the real story here.

It's a tale of extreme hubris. Sony rolled out this incredibly invasive 
copy-protection scheme without ever publicly discussing its details, 
confident that its profits were worth modifying its customers' 
computers. When its actions were first discovered, Sony offered a "fix" 
that didn't remove the rootkit, just the cloaking.

Sony claimed the rootkit didn't phone home when it did. On Nov. 4, 
Thomas Hesse, Sony BMG's president of global digital business, 
demonstrated the company's disdain for its customers when he said, 
"Most people don't even know what a rootkit is, so why should they care 
about it?" in an NPR interview. Even Sony's apology only admits that 
its rootkit "includes a feature that may make a user's computer 
susceptible to a virus written specifically to target the software."

However, imperious corporate behavior is not the real story either.

This drama is also about incompetence. Sony's latest rootkit-removal 
tool actually leaves a gaping vulnerability. And Sony's rootkit -- 
designed to stop copyright infringement -- itself may have infringed on 
copyright. As amazing as it might seem, the code seems to include an 
open-source MP3 encoder in violation of that library's license 
agreement. But even that is not the real story.

It's an epic of class-action lawsuits in California and elsewhere, and 
the focus of criminal investigations. The rootkit has even been found 
on computers run by the Department of Defense, to the Department of 
Homeland Security's displeasure. While Sony could be prosecuted under 
U.S. cybercrime law, no one thinks it will be. And lawsuits are never 
the whole story.

This saga is full of weird twists. Some pointed out how this sort of 
software would degrade the reliability of Windows. Someone created 
malicious code that used the rootkit to hide itself. A hacker used the 
rootkit to avoid the spyware of a popular game. And there were even 
calls for a worldwide Sony boycott. After all, if you can't trust Sony 
not to infect your computer when you buy its music CDs, can you trust 
it to sell you an uninfected computer in the first place? That's a good 
question, but -- again -- not the real story.

It's yet another situation where Macintosh users can watch, amused 
(well, mostly) from the sidelines, wondering why anyone still uses 
Microsoft Windows. But certainly, even that is not the real story.

The story to pay attention to here is the collusion between big media 
companies who try to control what we do on our computers and 
computer-security companies who are supposed to be protecting us.

Initial estimates are that more than half a million computers worldwide 
are infected with this Sony rootkit. Those are amazing infection 
numbers, making this one of the most serious internet epidemics of all 
time -- on a par with worms like Blaster, Slammer, Code Red and Nimda.

What do you think of your antivirus company, the one that didn't notice 
Sony's rootkit as it infected half a million computers? And this isn't 
one of those lightning-fast internet worms; this one has been spreading 
since mid-2004. Because it spread through infected CDs, not through 
internet connections, they didn't notice? This is exactly the kind of 
thing we're paying those companies to detect -- especially because the 
rootkit was phoning home.

But much worse than not detecting it before Russinovich's discovery was 
the deafening silence that followed. When a new piece of malware is 
found, security companies fall over themselves to clean our computers 
and inoculate our networks. Not in this case.

McAfee didn't add detection code until Nov. 9, and as of Nov. 15 it 
doesn't remove the rootkit, only the cloaking device. The company 
admits on its web page that this is a lousy compromise. "McAfee 
detects, removes and prevents reinstallation of XCP." That's the 
cloaking code. "Please note that removal will not impair the 
copyright-protection mechanisms installed from the CD. There have been 
reports of system crashes possibly resulting from uninstalling XCP." 
Thanks for the warning.

Symantec's response to the rootkit has, to put it kindly, evolved. At 
first the company didn't consider XCP malware at all. It wasn't until 
Nov. 11 that Symantec posted a tool to remove the cloaking. As of Nov. 
15, it is still wishy-washy about it, explaining that "this rootkit was 
designed to hide a legitimate application, but it can be used to hide 
other objects, including malicious software."

The only thing that makes this rootkit legitimate is that a 
multinational corporation put it on your computer, not a criminal 
organization.

You might expect Microsoft to be the first company to condemn this 
rootkit. After all, XCP corrupts Windows' internals in a pretty nasty 
way. It's the sort of behavior that could easily lead to system crashes 
-- crashes that customers would blame on Microsoft. But it wasn't until 
Nov. 13, when public pressure was just too great to ignore, that 
Microsoft announced it would update its security tools to detect and 
remove the cloaking portion of the rootkit.

Perhaps the only security company that deserves praise is F-Secure, the 
first and the loudest critic of Sony's actions. And Sysinternals, of 
course, which hosts Russinovich's blog and brought this to light.

Bad security happens. It always has and it always will. And companies 
do stupid things; always have and always will. But the reason we buy 
security products from Symantec, McAfee and others is to protect us 
from bad security.

I truly believed that even in the biggest and most-corporate security 
company there are people with hackerish instincts, people who will do 
the right thing and blow the whistle. That all the big security 
companies, with over a year's lead time, would fail to notice or do 
anything about this Sony rootkit demonstrates incompetence at best, and 
lousy ethics at worst.

Microsoft I can understand. The company is a fan of invasive copy 
protection -- it's being built into the next version of Windows. 
Microsoft is trying to work with media companies like Sony, hoping 
Windows becomes the media-distribution channel of choice. And Microsoft 
is known for watching out for its business interests at the expense of 
those of its customers.

What happens when the creators of malware collude with the very 
companies we hire to protect us from that malware?

We users lose, that's what happens. A dangerous and damaging rootkit 
gets introduced into the wild, and half a million computers get 
infected before anyone does anything.

Who are the security companies really working for? It's unlikely that 
this Sony rootkit is the only example of a media company using this 
technology. Which security company has engineers looking for the others 
who might be doing it? And what will they do if they find one?  What 
will they do the next time some multinational company decides that 
owning your computers is a good idea?

These questions are the real story, and we all deserve answers.

This essay originally appeared in Wired:
<http://www.wired.com/news/privacy/0,1848,69601,00.html>
There are a lot of links in this essay.  You can see them on Wired's 
page. Or here:
<http://www.schneier.com/essay-094.html>

These are my other blog posts on this:
<http://www.schneier.com/blog/archives/2005/11/sony_secretly_i_1.html>
<http://www.schneier.com/blog/archives/2005/11/more_on_sonys_d.html>
<http://www.schneier.com/blog/archives/2005/11/still_more_on_s_1.html>
<http://www.schneier.com/blog/archives/2005/11/the_sony_rootki.html>
There are lots of other links in these posts.


** *** ***** ******* *********** *************

      CME in Practice



CME is "Common Malware Enumeration," and it's an initiative by US-CERT 
to give all worms, viruses, and such uniform names.  The problem is 
that different security vendors use different names for the same thing, 
and it can be extremely confusing for customers.  A uniform naming 
system is a great idea.

Here's someone talking about how it's not working very well in 
practice.  Basically, while you can go from a vendor's site to the CME 
information, you can't go from the CME information to a vendor's 
site.  This essentially makes it worthless: just another name and 
number without references.

<http://isc.sans.org/diary.php?storyid=895>
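
The missing piece is just a reverse index. A minimal sketch (the vendor 
names and the CME number are made up for illustration):

  # Vendors can point their own alias at a CME number...
  vendor_to_cme = {
      ("VendorA", "W32/Mytob.XY"): "CME-123",
      ("VendorB", "Mytob.gen!far"): "CME-123",
  }

  # ...but the enumeration is only useful if you can also go the
  # other way, from the CME number back to every vendor write-up.
  cme_to_vendors = {}
  for (vendor, alias), cme in vendor_to_cme.items():
      cme_to_vendors.setdefault(cme, []).append((vendor, alias))

  print(cme_to_vendors["CME-123"])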

CME:
<http://cme.mitre.org/>

My original post on the topic:
<http://www.schneier.com/blog/archives/2005/09/computer_malwar.html>


** *** ***** ******* *********** *************

      OpenDocument Format and the State of Massachusetts



OpenDocument format (ODF) is an alternative to the Microsoft document, 
spreadsheet, and other Office file formats.

No big deal.  Except that Microsoft, with its proprietary Office 
document format, is spreading rumors that ODF is somehow less secure.

This, from the company that allows Office documents to embed arbitrary 
Visual Basic programs?

Yes, there is a way to embed scripts in ODF; this seems to be what 
Microsoft is pointing to.  But at least ODF has a clean and open XML 
format, which allows layered security and the ability to remove scripts 
as needed.  This is much more difficult in the binary Microsoft formats 
that effectively hide embedded programs.
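
Because an ODF document is a ZIP archive of XML, "remove scripts as 
needed" can be an automated rewrite at a mail or document gateway. A 
minimal sketch, assuming macros live in the archive's Basic/ and 
Scripts/ entries (a production filter would also rewrite the manifest 
and the office:scripts element in content.xml):

  import zipfile

  def strip_macros(src: str, dst: str) -> None:
      with zipfile.ZipFile(src) as zin, \
           zipfile.ZipFile(dst, "w") as zout:
          for item in zin.infolist():
              if item.filename.startswith(("Basic/", "Scripts/")):
                  continue   # drop embedded macro storage
              zout.writestr(item, zin.read(item.filename))

  strip_macros("report.odt", "report-clean.odt")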

Microsoft's claim that the open ODF is inherently less secure than the 
proprietary Office format is essentially an argument for security 
through obscurity.  ODF is no less secure than current .doc and other 
proprietary formats, and may be -- marginally, at least -- more secure.

The ODF people say it nicely: "There is no greater security risk, no 
greater ability to 'manipulate code' or gain access to content using 
ODF than alternative document formats. Security should be addressed 
through policy decisions on information sharing, regardless of document 
format. Security exposures caused by programmatic extensions such as 
the visual basic macros that can be imbedded in Microsoft Office 
documents are well known and notorious, but there is nothing distinct 
about ODF that makes it any more or less vulnerable to security risks 
than any other format specification. The many engineers working to 
enhance the ODF specification are working to develop techniques to 
mitigate any exposure that may exist through these extensions."

This whole thing has heated up because Massachusetts recently required 
public records be held in OpenDocument format, which has put Microsoft 
into a bit of a tizzy.  I don't know if it's why Microsoft is 
submitting its Office Document Formats to ECMA for "open 
standardization," but I'm sure it's part of the reason.

ODF:
<http://en.wikipedia.org/wiki/OpenDocument>
<http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=office>

ODF on security:
<http://www-128.ibm.com/developerworks/blogs/dw_blog_comments.jspa?blog=384&entry=98231> or <http://tinyurl.com/dwjgt>

Massachusetts decision:
<http://news.com.com/Massachusetts+moves+ahead+sans+Microsoft/2100-1012_3-5878869.html> or <http://tinyurl.com/74rgo>
<http://news.com.com/Massachusetts+assaults+monoculture/2010-7344_3-5968740.html> or <http://tinyurl.com/cpfz7>
<http://riskman.typepad.com/perilocity/2005/11/mass_opens_doc.html>

Microsoft's actions:
<http://www.microsoft.com/presspass/press/2005/nov05/11-21EcmaPR.mspx>


** *** ***** ******* *********** *************

      News



Here's a cell phone that can detect if it is stolen by measuring the 
gait of the person carrying it.  Clever, as long as you realize that 
there are going to be a lot of false alarms.  This seems okay: "If the 
phone suspects it has fallen into the wrong hands, it will prompt the 
user for a password if they attempt to make calls or access its memory."
<http://www.newscientist.com/article.ns?id=dn8161>
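The mechanism is ordinary anomaly detection: compare a feature of the 
current accelerometer trace against the owner's enrolled gait profile, 
and fall back to a password prompt when the distance is too large. A 
toy sketch (the feature, profile value, and threshold are all 
illustrative; real systems use much richer features):

  def gait_feature(samples):
      # Standard deviation of the acceleration trace as a crude
      # stand-in for a gait signature.
      mean = sum(samples) / len(samples)
      return (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5

  OWNER_PROFILE = 1.8   # enrolled value (illustrative)
  THRESHOLD = 0.6       # lower threshold = more false alarms

  def require_password(samples) -> bool:
      return abs(gait_feature(samples) - OWNER_PROFILE) > THRESHOLD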

For a long time now, I've been saying that the rate of identity theft 
has been grossly overestimated: too many things are counted as identity 
theft that are just traditional fraud.  Here's some interesting data to 
back that claim up:
<http://www.schneier.com/blog/archives/2005/11/identity_theft.html>
Identity theft is a serious crime, and it's a major growth industry in 
the criminal world.  But we do everyone a disservice when we count 
things as identity theft that really aren't.

More evidence that hackers are migrating into crime:
<http://www.biosmagazine.co.uk/op.php?id=314>

A Canadian reporter was able to get phone records for the personal and 
professional accounts held by Canadian Privacy Commissioner Jennifer 
Stoddart through an American data broker, locatecell.com.  The security 
concerns are obvious.
<http://www.macleans.ca/topstories/canada/article.jsp?content=20051121_115779_115779> or <http://tinyurl.com/bgf69>
Canada has an exception in the privacy laws that allows newspapers to 
do this type of investigative reporting. My guess is that's the only 
reason we haven't seen an American reporter pull phone records on one 
of our government officials.

Western Union has been the conduit of a lot of fraud.  But since 
they're not the victim, they don't care much about security.  It's an 
externality to them.  It took a lawsuit to convince them to take 
security seriously.
<http://seattlepi.nwsource.com/local/248537_scams16.html>

Ex-MI5 chief calls ID cards "useless."  Refreshing candor.
<http://news.bbc.co.uk/1/hi/uk_politics/4444512.stm>

An Iowa prison break illustrates an important security 
principle:  Guards = dynamic security.  Tripwires = static 
security.  Dynamic security is better than static security.
<http://www.schneier.com/blog/archives/2005/11/prisons_and_gua.html>

Coming soon to an airport near you -- automatic lie detectors:
<http://news.yahoo.com/s/nm/20051117/tc_nm/security_liedetector_dc>
In general, I prefer security systems that are invasive yet anonymous 
to ones that are based on massive databases.  And automatic systems 
that divide people into "probably fine" and "investigate a bit more" 
categories seem like a good use of technology.  I have no idea whether 
this system works (there is a lot of evidence that it does not), what 
the false positive and false negative rates are (one article states a 
completely useless 12% false positive rate), or how easy it would be to 
learn how to fool the system, though.  And in all of these trade-off 
discussions, the devil is in the details.
<http://www.trancewave.com/novalounge/blog/2005/11/something-for-that-growing-pile-of-bad.html> or <http://tinyurl.com/a6m6y>
<http://news.com.com/Lie+detectors+may+be+next+step+in+airline+security/2100-1008_3-5958656.html> or <http://tinyurl.com/9wcf9>

I regularly get anonymous e-mail from people exposing software 
vulnerabilities.  This one, about a possible Net Objects Fusion 9 
vulnerability, looks interesting:
<http://www.schneier.com/blog/archives/2005/11/possible_net_ob.html>

This is an amazing story: Doris Payne, a 75-year-old jewel thief.
<http://www.msnbc.msn.com/id/10072306/>

Another movie plot threat: electronic pulses from space:
<http://www.washtimes.com/national/20051121-103434-8775r.htm>
I love this quote: "This is the single most serious national-security 
challenge and certainly the least known."  The "single most serious 
national-security challenge."  Absolutely nothing more serious.  Sheesh.

Do you own shares of a Janus mutual fund?  Can you vote your shares 
through a website called vote.proxy-direct.com?  If so, you can vote 
the shares of others.  If you have a valid proxy number, you can add 
1300 to the number to get another valid proxy number.  Once entered, 
you get another person's name, address, and account number at 
Janus!  You could then vote their shares too.  It's easy.  Probably 
illegal.  Definitely a great resource for identity thieves.  Certainly 
pathetic.
<http://www.schneier.com/blog/archives/2005/11/vote_someone_el.html>
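The root flaw is a predictable identifier doing double duty as an 
authenticator. The standard fix is to issue tokens from a cryptographic 
random source, so holding one token reveals nothing about any other. A 
minimal sketch (the function name is illustrative):

  import secrets

  def issue_proxy_token() -> str:
      # 128 bits of randomness: a neighbor's token can't be derived
      # by adding 1300, and enumeration is computationally hopeless.
      return secrets.token_urlsafe(16)

  print(issue_proxy_token())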

Chris Hoofnagle is the West Coast Director for EPIC.  This is his 
consumer privacy top 10:
<http://west.epic.org/archives/2005/11/hoofnagles_cons.html>

The European music industry is lobbying the European Parliament, 
demanding things that the RIAA can only dream about.  They want 
anti-terror laws to apply to music downloaders, too.
<http://www.guardian.co.uk/arts/netmusic/story/0,13368,1651273,00.html> 
or <http://tinyurl.com/d35ks>
Our society definitely needs a serious conversation about the 
fundamental freedoms we are sacrificing in a misguided attempt to keep 
us safe from terrorism.  It feels both surreal and sickening to have to 
defend our fundamental freedoms against those who want to stop people 
from sharing music.  How is it possible that we can contemplate so much 
damage to our society simply to protect the business model of a handful 
of companies?

Safecracking with thermal imaging:
<http://lcamtuf.coredump.cx/tsafe/>

Are we giving the U.S. military the power to conduct domestic
surveillance?
<http://www.washingtonpost.com/wp-dyn/content/article/2005/11/26/AR2005112600857_pf.html> or <http://tinyurl.com/buqyy>
The police and the military have fundamentally different missions.  The 
police protect citizens.  The military attacks the enemy.  When you 
start giving police powers to the military, citizens start looking like 
the enemy.  We gain a lot of security because we separate the functions 
of the police and the military, and we will all be much less safe if 
we allow those functions to blur.  This kind of thing worries me far 
more than terrorist threats.

Want to make the country safer from terrorism?  Take the money now 
being wasted on national ID cards, massive data mining projects, 
fingerprinting foreigners, airline passenger profiling, etc., and use 
it to fund worldwide efforts to interdict terrorist funding:
<http://www.mezomorf.com/washington/news-14007.html>

This has got to be the most bizarre movie-plot threat to date: alien 
viruses downloaded via the SETI project:
<http://technology.guardian.co.uk/news/story/0,16559,1650296,00.html>
Here's his website:
<http://home.fnal.gov/~carrigan/SETI/SETI_Hacker.htm>

What is it with this month?  I can't turn around without seeing another 
dumb movie-plot threat.  Here, a Thai minister is warning people not to 
return unknown calls on their cell phone, because they might be used to 
detonate terrorist bombs.
<http://sg.news.yahoo.com/051129/1/3wwa8.html>

Miami police to stage "random shows of force":
<http://www.breitbart.com/news/2005/11/28/D8E5RPBO5.html>
More info:
<http://www.schneier.com/blog/archives/2005/11/miami_police_st.html>
<http://www.discourse.net/archives/2005/11/miami_wont_be_checking_id.html> or <http://tinyurl.com/75c9c>
<http://www.discourse.net/archives/2005/11/miami_wont_be_checking_id_updated.html> or <http://tinyurl.com/b4m8x>

Counterfeiting is big business in Colombia: "Colombia is thought to 
produce more than 40 percent of fake money circulating around the
world."
<http://edition.cnn.com/2005/WORLD/americas/11/20/colombia.counterfeit.ap> or <http://tinyurl.com/cmo9a>

This sentence jumped out at me in an otherwise pedestrian article on 
criminal fraud: "Fraud is fundamentally fuelling the growth of 
organised crime in the UK, earning more from fraud than they do from 
drugs."
<http://news.bbc.co.uk/2/hi/business/4463132.stm>
I'll bet that most of that involves the Internet to some degree.
And then there's this: "Global cybercrime turned over more money than 
drug trafficking last year, according to a US Treasury advisor. Valerie 
McNiven, an advisor to the US government on cybercrime, claimed that 
corporate espionage, child pornography, stock manipulation, phishing 
fraud and copyright offences cause more financial harm than the trade 
in illegal narcotics such as heroin and cocaine."
<http://www.channelregister.co.uk/2005/11/29/cybercrime>
This doesn't bode well for computer security in general.

Open-source intelligence: a good idea.
<http://www.politicalgateway.com/news/read.html?id=5315>

This is absolutely fascinating research by Matt Blaze on evading 
telephone wiretapping systems.
<http://www.crypto.com/papers/wiretapping/>
<http://www.mezomorf.com/national/news-14087.html>

Daniel Solove on Google and privacy:
<http://www.concurringopinions.com/archives/2005/11/the_google_empi.html> or <http://tinyurl.com/arpym>
Here's a Boston Globe op-ed on the same topic:
<http://www.boston.com/news/globe/editorial_opinion/oped/articles/2005/12/03/google_search_and_seizure> or <http://tinyurl.com/856h9>

New phishing trick:
<http://abcnews.go.com/Technology/PCWorld/story?id=1351041>
I like the advice:  "To distinguish an imposter from the genuine 
article, you should carefully scan the security certificate prompt for 
a reference to either 'a self-issued certificate' or 'an unknown 
certificate authority.'"  Yeah, like anyone is going to do that.

A funny -- and all too true -- addition to the SANS Top 20: Humans.
<http://rwanner.blogspot.com/2005/11/human-side-of-security.html>

Why are limitations on police power a partisan political issue?
<http://www.schneier.com/blog/archives/2005/12/limitations_on_1.html>

Does the FBI get to approve all software?  It sounds implausible, but 
how else do you explain this FCC ruling:
<http://news.com.com/2061-10804_3-5884130.html>

Interesting GAO report on electronic voting.
<http://www.gao.gov/new.items/d05956.pdf>
Read the "Results in Brief" section, at least.
<http://www.schneier.com/blog/archives/2005/12/gao_report_on_e.html>

The Onion on Security
"CIA Realizes It's Been Using Black Highlighters All These Years":
<http://www.theonion.com/content/node/43014>
"Terrorist Has No Idea What To Do With All This Plutonium":
<http://www.theonion.com/content/node/43012>
"RIAA Bans Telling Friends About Songs":
<http://www.theonion.com/content/node/43029>

Pity this story about armed killer dolphins is fake:
<http://observer.guardian.co.uk/international/story/0,6903,1577753,00.html> or <http://tinyurl.com/89xhk>
<http://www.snopes.com/katrina/rumor/dolphins.asp>

This is a bit technical, but it's a good window into the hacker 
mentality.  This guy walks step by step through the process of figuring 
out how to exploit a Cisco vulnerability.
<http://www.infohacking.com/INFOHACKING_RESEARCH/Our_Advisories/cisco/index.html> or <http://tinyurl.com/adrcn>

Yet another story about benevolent worms and how they can secure our 
networks.
<http://www.newscientist.com/article.ns?id=dn8403>
This idea shows up every few years.  (I wrote about it in 2000, and 
again in 2003.)
My comments this time:
<http://www.schneier.com/blog/archives/2005/12/benevolent_worm.html>

I've already written about merchants using classical music to 
discourage loitering.  Young people don't like the music, so they don't 
stick around.  Here's a new twist: high-frequency noise that children 
and teenagers can hear but adults can't:
<http://www.theage.com.au/news/world/rowdies-buzz-off-as-the-mosquito-bites/2005/11/29/1133026467657.html> or <http://tinyurl.com/bww28>
Classical music as a deterrent:
<http://www.schneier.com/blog/archives/2005/08/low-tech_loiter.html>

This is a really interesting article from Wired on emergency 
information services.  I like the talk about the inherent strength of 
agile communications systems and its usefulness in disseminating 
emergency information.  Also the bottom-up approach to information.
<http://www.wired.com/wired/archive/13.12/warning.html>

30,000 people mistakenly put on terrorist watch list:
<http://news.zdnet.com/2100-1009_22-5984673.html>
When are we finally going to admit that the DHS is incompetent at 
this?  At least they weren't kidnapped and imprisoned for five months, 
and "shackled, beaten, photographed nude and injected with drugs by 
interrogators."
<http://www.washingtonpost.com/wp-dyn/content/article/2005/12/03/AR2005120301476.html> or <http://tinyurl.com/co2js>
<http://www.smh.com.au/news/Global-Terrorism/Innocent-German-beaten-by-US-jailers/2005/04/24/1114281451199.html> or <http://tinyurl.com/9kgtb>

In September, the Inspector General of the Department of Homeland 
Security published a report on the security of the USCIS (United States 
Citizenship and Immigration Services) databases.  It's called: 
"Security Weaknesses Increase Risks to Critical United States 
Citizenship and Immigration Services Database," and a redacted version 
is on the DHS website.
<http://www.dhs.gov/interweb/assetlibrary/OIGr_05-42_Sep05.pdf>
A piece of the Executive Summary:
<http://www.schneier.com/blog/archives/2005/12/us_immigration.html>

The article is a bit inane, but it talks about an interesting security 
problem.  "E-hijacking" is the term used to describe the theft of goods 
in transit by altering the electronic paperwork.
<http://fleetowner.com/news/topstory/hijack_electronic_data_truck_ehijack_security_110305/> or <http://tinyurl.com/df5vd>
More and more, the physical movement of goods is secondary to the 
electronic movement of information.  Oil being shipped across the 
Atlantic, for example, can change hands several times while it is in 
transit.  I see a whole lot of new risks along these lines in the
future.

Dan Geer on monocultures and operating systems.
<http://www.usenix.org/publications/login/2005-12/openpdfs/geer.pdf>

I remember reading this fictional terrorism story by G. Gordon Liddy 
when it first appeared in Omni in 1989.  I wouldn't say he "predicted 
attack on America," but he did produce an entertaining piece of fiction.
<http://www.liddyshow.us/mustread11.php>

For a while I've been saying that most stolen identities are never 
used.  It's nice to see some independent confirmation:
<http://www.schneier.com/blog/archives/2005/12/most_stolen_ide.html>

FBI says that cyberterrorism is unlikely.  A surprising outbreak of
reason.
<http://www.cnn.com/2005/TECH/internet/12/08/cyber.attack.fbi.reut/index.html> or <http://tinyurl.com/drt7y>
<http://www.theage.com.au/news/breaking/fbi-rules-out-cyberattacks/2005/12/08/1133829693386.html> or <http://tinyurl.com/8fdkl>
Here's a debate on the topic:
<http://www.watchguard.com/RSS/showarticle.aspx?pack=RSS.RTcyberterr>
And here are my comments from 2003:
<http://www.schneier.com/crypto-gram-0306.html#1>

Good paper by Brian Snow of the NSA on security and assurance.
<http://www.acsa-admin.org/2005/papers/Snow.pdf>

There seems to be a well-organized Chinese military hacking effort 
against the U.S. military. The U.S. code name for the effort is "Titan 
Rain." The news reports are spotty, and more than a little 
sensationalist, but I know people involved in this investigation -- the 
attackers are very well-organized.
<http://www.terra.net.lb/wp/Articles/DesktopArticle.aspx?ArticleID=260955&ChannelId=16> or <http://tinyurl.com/8rozx>
<http://news.zdnet.com/2100-1009_22-5969516.html>
<http://www.time.com/time/magazine/article/0,9171,1098961-1,00.html>

Korea solves the identity theft problem: they make banks liable.
<http://www.finextra.com/fullstory.asp?id=14634>
Of course, by itself this action doesn't solve identity theft. But in a 
vibrant capitalist economic market, this action is going to pave the 
way for technical security improvements that will effectively deal with 
identity theft.  The good news for the rest of us is that we can watch 
what happens now.

Funny airline security story:
<http://www.schneier.com/blog/archives/2005/12/weakest_link_se.html>
Remember, security is only as strong as the weakest link.

Leon County, FL dumps Diebold voting machines after they learn how easy 
it is to hack the vote:
<http://www.bbvforums.org/cgi-bin/forums/board-auth.cgi?file=/1954/15595.html>

Interesting research about whether port scans are precursors to attacks:
<http://www.techworld.com/security/news/index.cfm?NewsID=4991>


** *** ***** ******* *********** *************

      Surveillance and Oversight



Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on 
New Year's Eve. In the absence of any real evidence, the FBI tried to 
compile a real-time database of everyone who was visiting the city. It 
collected customer data from airlines, hotels, casinos, rental car 
companies, even storage locker rental companies. All this information 
went into a massive database -- probably close to a million people 
overall -- that the FBI's computers analyzed, looking for links to 
known terrorists. Of course, no terrorist attack occurred and no plot 
was discovered: The intelligence was wrong.

A typical American citizen spending the holidays in Vegas might be 
surprised to learn that the FBI collected his personal data, but this 
kind of thing is increasingly common. Since 9/11, the FBI has been 
collecting all sorts of personal information on ordinary Americans, and 
it shows no signs of letting up.

The FBI has two basic tools for gathering information on large groups 
of Americans. Both were created in the 1970s to gather information 
solely on foreign terrorists and spies. Both were greatly expanded by 
the USA Patriot Act and other laws, and are now routinely used against 
ordinary, law-abiding Americans who have no connection to terrorism. 
Together, they represent an enormous increase in police power in the 
United States.

The first are FISA warrants (sometimes called Section 215 warrants, 
after the section of the Patriot Act that expanded their scope). These 
are issued in secret, by a secret court. The second are national 
security letters, less well known but much more powerful, and which FBI 
field supervisors can issue all by themselves. The exact numbers are 
secret, but a recent Washington Post article estimated that 30,000 
letters each year demand telephone records, banking data, customer 
data, library records, and so on.

In both cases, the recipients of these orders are prohibited by law 
from disclosing the fact that they received them. And two years ago, 
Attorney General John Ashcroft rescinded a 1995 guideline that this 
information be destroyed if it is not relevant to whatever 
investigation it was collected for. Now, it can be saved indefinitely, 
and disseminated freely.

September 2005, Rotterdam. The police had already identified some of 
the 250 suspects in a soccer riot from the previous April, but most 
remained unidentified despite being captured on video. In an effort to help, they 
sent text messages to 17,000 phones known to be in the vicinity of the 
riots, asking that anyone with information contact the police. The 
result was more evidence, and more arrests.

The differences between the Rotterdam and Las Vegas incidents are 
instructive. The Rotterdam police needed specific data for a specific 
purpose. Its members worked with federal justice officials to ensure 
that they complied with the country's strict privacy laws. They 
obtained the phone numbers without any names attached, and deleted them 
immediately after sending the single text message. And their actions 
were public, widely reported in the press.

On the other hand, the FBI has no judicial oversight. With only a vague 
hint that a Las Vegas attack might occur, the bureau vacuumed up an 
enormous amount of information. First its members tried asking for the 
data; then they turned to national security letters and, in some cases, 
subpoenas. There was no requirement to delete the data, and there is 
every reason to believe that the FBI still has it all. And the bureau 
worked in secret; the only reason we know this happened is that the 
operation leaked.

These differences illustrate four principles that should guide our use 
of personal information by the police. The first is oversight: In order 
to obtain personal information, the police should be required to show 
probable cause, and convince a judge to issue a warrant for the 
specific information needed. Second, minimization: The police should 
only get the specific information they need, and not any more. Nor 
should they be allowed to collect large blocks of information in order 
to go on "fishing expeditions," looking for suspicious behavior. The 
third is transparency: The public should know, if not immediately then 
eventually, what information the police are getting and how it is being 
used. And fourth, destruction: Any data the police obtain should be 
destroyed immediately after its court-authorized purpose is achieved. 
The police should not be able to hold on to it, just in case it might 
become useful at some future date.

This isn't about our ability to combat terrorism; it's about police 
power. Traditional law already gives police enormous power to peer into 
the personal lives of people, to use new crime-fighting technologies, 
and to correlate that information. But unfettered police power quickly 
resembles a police state, and checks on that power make us all safer.

As more of our lives become digital, we leave an ever-widening audit 
trail in our wake. This information has enormous social value -- not 
just for national security and law enforcement, but for purposes as 
mundane as using cell-phone data to track road congestion, and as 
important as using medical data to track the spread of diseases. Our 
challenge is to make this information available when and where it needs 
to be, but also to protect the principles of privacy and liberty our 
country is built on.

This essay originally appeared in the Minneapolis Star Tribune.


** *** ***** ******* *********** *************

      Truckers Watching the Highways



Highway Watch is yet another civilian distributed counterterrorism 
program.  Basically, truckers are trained to look out for suspicious 
activities on the highways.  Despite its similarities to ill-conceived, 
still-born programs like TIPS, I think this one has some merit.

Why?  Two things: training, and a broader focus than terrorism.  This 
is from their overview:  "Highway Watch(R) training provides Highway 
Watch(R) participants with the observational tools and the opportunity 
to exercise their expert understanding of the transportation environment 
to report safety and security concerns rapidly and accurately to the 
authorities. In addition to matters of homeland security - stranded 
vehicles or accidents, unsafe road conditions, and other safety related 
situations are reported, eliciting the appropriate emergency responders. 
Highway Watch(R) reports are combined with other information sources 
and shared both with federal agencies and the roadway transportation 
sector by the Highway ISAC."

Sure, the "matters of homeland security" is the sexy application that 
gets the press and the funding, but "stranded vehicles or accidents, 
unsafe road conditions, and other safety related situations" are likely 
to be the bread and butter of this kind of program.  And interstate 
truckers are likely to be in a good position to report these things, 
assuming there's a good mechanism for it.

About the training:  "Highway Watch(R) participants attend a 
comprehensive training session before they become certified Highway 
Watch(R) members. This training incorporates both safety and security 
issues. Participants are instructed on what to look for when witnessing 
traffic accidents and other safety-related situations and how to make a 
proper emergency report. Highway Watch(R) curriculum also provides 
anti-terrorism information, such as: a brief account of modern 
terrorist attacks from around the world, an outline explaining how 
terrorist acts are usually carried out, and tips on preventing 
terrorism. From this solid baseline curriculum, different segments of 
the highway sector have or are developing unique modules attuned to 
their specific security related situation."

Okay, okay, it does sound a bit hokey.  "...tips on preventing 
terrorism" indeed.  (Tip #7: When transporting nuclear wastes, always 
be sure to padlock your truck.  Tip #12:  If someone asks you to 
deliver a trailer to the parking lot underneath a large office building 
and run away very fast, always check with your supervisor first.)  But 
again, I like the inclusion of the mundane "what to look for when 
witnessing traffic accidents and other safety-related situations and 
how to make a proper emergency report."

This program has a lot of features I like in security systems: it's 
dynamic, it's distributed, it relies on trained people paying 
attention, and it's not focused on a specific threat.

Usually we see terrorism as the justification for something that is 
ineffective and wasteful.  Done right, this could be an example of 
terrorism being used as the justification for something that is smart 
and effective.

<http://www.highwaywatch.com/>


** *** ***** ******* *********** *************

      Snake-Oil Research in the Magazine "Nature"



Snake-oil isn't only in commercial products.  Here's a piece of 
research in "Nature" that's just full of it.

The article suggests using chaos in an electro-optical system to 
generate a pseudo-random light sequence, which is then added to the 
message to protect it from interception.  Now, the idea of using chaos 
to build encryption systems has been tried many times in the 
cryptographic community, and has always failed.  But the authors of the 
"Nature" article show no signs of familiarity with prior cryptographic 
work.

The published system has the obvious problem that it does not include 
any form of message authentication, so it will be trivial to send 
spoofed messages or tamper with messages while they are in transit.

But a closer examination of the paper's figures suggests a far more 
fundamental problem.  There's no key.  Anyone with a valid receiver can 
decode the ciphertext.  No key equals no security, and what you have 
left is a totally broken system.

I e-mailed Claudio R. Mirasso, the corresponding author, about the lack 
of any key, and got this reply:  "To extract the message from the 
chaotic carrier you need to replicate the carrier itself. This can only 
be done by a laser that matches the emitter characteristics within, 
let's say, within 2-5%. Semiconductor lasers with such similarity have 
to be carefully selected from the same wafer. Even though you have to 
test them because they can still be too different and do not 
synchronize. We talk ab[o]ut a hardware key. Also the operating conditions 
(current, feedback length and coupling strength) are part of the key."

Let me translate that.  He's saying that there is a hardware key baked 
into the system at fabrication.  (It comes from manufacturing 
deviations in the lasers.)   There's no way to change the key in the 
field. There's no way to recover security if any of the 
transmitters/receivers are lost or stolen.  And they don't know how 
hard it would be for an attacker to build a compatible receiver, or 
even a tunable receiver that could listen to a variety of encodings.

This paper would never get past peer review in any competent 
cryptography journal or conference.  I'm surprised it was accepted in 
"Nature," a fiercely competitive journal.  I don't know why "Nature" is 
taking articles on topics that are outside its usual competence, but it 
looks to me like "Nature" got burnt here by a lack of expertise in the 
area.

To be fair, the paper very carefully skirts the issue of security, and 
claims hardly anything: "Additionally, chaotic carriers offer a certain 
degree of intrinsic privacy, which could complement (via robust 
hardware encryption) both classical (software based) and quantum 
cryptography systems."  Now that "certain degree of intrinsic privacy" 
is approximately zero.  But other than that, they're very careful how 
they word their claims.

For instance, the abstract says: "Chaotic signals have been proposed as 
broadband information carriers with the potential of providing a high 
level of robustness and privacy in data transmission."  But there's no 
disclosure that this proposal is bogus, from a privacy 
perspective.  And the next-to-last paragraph says "Building on this, it 
should be possible to develop reliable cost-effective secure 
communication systems that exploit deeper properties of chaotic 
dynamics."  No disclosure that "chaotic dynamics" is actually 
irrelevant to the "secure" part.  The last paragraph talks about "smart 
encryption techniques" (referencing a paper that talks about chaos 
encryption), "developing active eavesdropper-evasion strategies" 
(whatever that means), and so on.  It's just enough that if you don't 
parse their words carefully and don't already know the area well, you 
might come away with the impression that this is a major advance in 
secure communications.  It seems as if it would have helped to have a 
more careful disclaimer.

Communications security was listed as one of the motivations for 
studying this communications technique.  To list this as a motivation, 
without explaining that their experimental setup is actually useless 
for communications security, is questionable at best.

Meanwhile, the press has written articles that convey the wrong 
impression.  A "Science News" article lauds this as a big achievement 
for communications privacy.

It talks about it as a "new encryption strategy," "chaos-encrypted 
communication," "1 gigabyte of chaos-encrypted information per 
second."  It's obvious that the communications security aspect is what 
"Science News" is writing about.  If the authors knew that their scheme 
is useless for communications security, they didn't explain that very
well.

There is also a "New Scientist" article titled "Let chaos keep your 
secrets safe" that characterizes this as a "new cryptographic 
technique," but I can't get a copy of the full article.

Here are two more articles that discuss its security benefits.  In the 
latter, Mirasso says "the main task we have for the future" is to 
"define, test, and calibrate the security that our system can offer."

And their project website says that "the continuous increase of 
computer speed threatens the safety" of traditional cryptography (which 
is bogus) and suggests using physical-layer chaos as a way to solve 
this.  That's listed as the goal of the project.

There's a lesson here.  This is research undertaken by researchers with 
no prior track record in cryptography, submitted to a journal with no 
background in cryptography, and reviewed by reviewers with who knows 
what kind of experience in cryptography.  Cryptography is a subtle 
subject, and trying to design new cryptosystems without the necessary 
experience and training in the field is a quick route to insecurity.

And what's up with "Nature"?  Cryptographers with no training in 
physics know better than to think they are competent to evaluate 
physics research.  If a physics paper were submitted to a cryptography 
journal, the authors would likely be gently redirected to a physics 
journal -- we wouldn't want our cryptography conferences to accept a 
paper on a subject they aren't competent to evaluate.  Why would 
"Nature" expect the situation to be any different when physicists try 
to do cryptography research?

Nature article (pay only; sorry):
<http://www.nature.com/nature/journal/v438/n7066/full/nature04275.html> 
or <http://tinyurl.com/8y33u>

Other articles:
<http://www.sciencenews.org/articles/20051119/fob5.asp>
<http://www.newscientist.com/channel/fundamentals/mg18825262.000>
<http://www.physorg.com/news8355.html>
<http://optics.org/articles/news/11/11/13/1>

Project website:
<http://nova.uib.es/project/occult/nav1/presentation.html>


** *** ***** ******* *********** *************

      Counterpane News



Schneier has no speaking engagements between now and January 15.  Happy 
holidays, everyone.

Counterpane and LogLogic announce a partnership:
<http://www.counterpane.com/pr-20051121.html>


** *** ***** ******* *********** *************

      Twofish Cryptanalysis Rumors



Recently I have been hearing some odd "Twofish has been broken" 
rumors.  I thought I'd quell them once and for all.

Rumors of the death of Twofish have been greatly exaggerated.

The analysis in question is by Shiho Moriai and Yiqun Lisa Yin, who 
published their results in Japan in 2000.  Recently, someone either got 
a copy of the paper or heard about the results, and rumors started 
spreading.

The actual paper presents no cryptanalytic attacks, only some 
hypothesized differential characteristics.  Moriai and Yin discovered 
byte-sized truncated differentials for 12- and 16-round Twofish (the 
full cipher has 16 rounds), but were unable to use them in any sort of 
attack.  They also discovered a larger, 5-round truncated differential. 
No one has been able to convert these differentials into an attack, and 
Twofish is nowhere near broken.  On the other hand, they are excellent 
and interesting results -- and it's a really good paper.

In more detail, here are the paper's three results:

1.  The authors show a 12-round truncated differential characteristic 
that predicts that the 2nd byte of the ciphertext difference will be 0 
when the plaintext difference is all-zeros except for its last 
byte.  They say the characteristic holds with probability 
2^-40.9.  Note that for an ideal cipher, we expect the 2nd byte of the 
ciphertext difference to be 0 with probability 2^-8, just by chance.  Of course, 
2^-8 is much, much larger than 2^-40.9.  Therefore, this is not 
particularly useful in a distinguishing attack.

One possible interpretation of their result would be to conjecture that 
the 2nd byte of ciphertext difference will be 0 with probability 2^-8 + 
2^-40.9 for Twofish, but only 2^-8 for an ideal cipher.  Their 
characteristic is just one path.  If one is lucky, perhaps all other 
paths behave randomly and contribute an additional 2^-8 factor to the 
total probability of getting a 0 in the 2nd byte of ciphertext 
difference.  Perhaps.  One might conjecture that, anyway.

It is not at all clear whether this conjecture is true, and the authors 
are careful not to claim it.  If it were true, it might lead to a 
theoretical distinguishing attack using 2^75 chosen plaintexts or so 
(very rough estimate).  But I'm not at all sure that the conjecture is 
true.

2.  They show a 16-round truncated differential that predicts that the 
2nd byte of the ciphertext difference will be 0 (under the same input 
difference).  Their characteristic holds with probability 2^-57.3 (they 
say).  Again, this is not very useful.

Analogously to the first result, one might conjecture that the 2nd byte 
of the ciphertext difference will be 0 with probability 2^-8 + 2^-57.3 
for Twofish, but probability 2^-8 for an ideal cipher.  If this were 
true, one might be able to mount a distinguishing attack with 2^100 
chosen plaintexts or so (another very rough estimate).  But I have no 
idea whether the conjecture is true.

3.  They also show a 5-round truncated differential characteristic that 
predicts that the input difference that is non-zero everywhere except 
in its 9th byte will lead to an output difference of the same 
form.  This characteristic has probability 2^-119.988896, they say (but 
they also say that they have made some approximations, and the actual 
probabilities can be a little smaller or a little larger).  Compared to 
an ideal cipher, where one would expect this to happen by chance with 
probability 2^-120, this isn't very interesting.  It's hard to imagine 
how this could be useful in a distinguishing attack.

The paper theorizes that all of these characteristics might be useful 
in an attack, but I would be very careful about drawing any 
conclusions.  It can be very tricky to go from single-path 
characteristics whose probability is much smaller than the chances of 
it happening by chance in an ideal cipher, to a real attack.  The 
problem is in the part where you say "let's just assume all other paths 
behave randomly."  Often the other paths do not behave randomly, and 
attacks that look promising fall flat on their faces.

We simply don't know whether these truncated differentials would be 
useful in a distinguishing attack.  But what we do know is that even if 
everything works out perfectly to the cryptanalyst's benefit, and if an 
attack is possible, then such an attack is likely to require a totally 
unrealistic number of chosen plaintexts.  2^100 plaintexts is something 
like a billion billion DVDs' worth of data, or a T1 line running for a 
million times the age of the universe.  (Note that these numbers might 
be off by a factor of 1,000 or so.  But honestly, who cares?  The 
numbers are so huge as to be irrelevant.)  And even with all that data, 
a distinguishing attack is not the same as a key recovery attack.
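
Here's a quick sanity check of those estimates (a rough sketch in 
Python; the p/eps^2 sample-count rule is a standard heuristic I'm 
applying here, not something taken from the Moriai-Yin paper):

    def log2_texts_needed(log2_p, log2_eps):
        # Rule-of-thumb distinguisher data requirement: about p/eps^2
        # samples to detect a bias of eps on an event of baseline
        # probability p.
        return log2_p - 2 * log2_eps

    print(log2_texts_needed(-8, -40.9))   # ~73.8: roughly 2^75 texts
    print(log2_texts_needed(-8, -57.3))   # ~106.6: roughly 2^100 texts

    # Data volume for 2^100 chosen plaintexts (16-byte Twofish blocks):
    total_bytes = 2**100 * 16
    print(total_bytes / 4.7e9)            # ~4.3e21 single-layer DVDs

    # Time to push that through a T1 line (1.544 Mbps), measured in
    # ages of the universe (~13.7 billion years):
    seconds = total_bytes / (1.544e6 / 8)
    universe_s = 13.7e9 * 365.25 * 24 * 3600
    print(seconds / universe_s)           # ~2.4e8 universe-ages

Those figures land comfortably within the factor-of-1,000 error bars 
mentioned above.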

Again, I am not trying to belittle the results.  Moriai and Yin did 
some great work here, and they deserve all kinds of credit for it.  But 
even from a theoretical perspective, Twofish isn't even remotely 
broken.  There have been no extensions to these results since they were 
published five years ago.  The best Twofish cryptanalysis is still the 
work we did during the design process.

Moriai-Yin paper:
<http://www.schneier.com/twofish-analysis-shiho.pdf>

Twofish home page:
<http://www.schneier.com/twofish.html>


** *** ***** ******* *********** *************

      Totally Secure Classical Communications?



How would you feel if you invested millions of dollars in quantum 
cryptography, and then learned that you could do the same thing with a 
few 25-cent Radio Shack components?

I'm exaggerating a little here, but if a new idea out of Texas A&M 
University turns out to be secure, we've come close.

Earlier this month, Laszlo Kish proposed securing a communications 
link, like a phone or computer line, with a pair of resistors. By 
adding electronic noise, or using the natural thermal noise of the 
resistors -- called "Johnson noise" -- Kish can prevent eavesdroppers 
from listening in.

In the blue-sky field of quantum cryptography, the strange physics of 
the subatomic world is harnessed to create a secure, unbreakable 
communications channel between two points. Kish's research is 
intriguing, in part, because it uses the simpler properties of classical 
physics -- the stuff you learned in high school -- to achieve the same 
results.

At least, that's the theory. Here's how the scheme works:

Alice and Bob have a two-wire cable between them, and two resistors 
each -- we'll say they each have a 10-ohm and a 1,000-ohm resistor. 
Alice connects a stochastic voltage generator and a resistor in series 
to each of the two wires. That's the setup.

Here's how they communicate. At each clock tick, both Alice and Bob 
randomly choose one of their two resistors and put it in the circuit. 
Then, Alice and Bob both measure the current flowing through the 
circuit. Basically, it's inversely proportional to the sum of their two 
chosen resistors: 20 ohms, 1,010 ohms or 2,000 ohms. Of course, the 
eavesdropper can measure the same thing.

If Alice and Bob choose the same size resistor, then the eavesdropper 
knows what they have chosen, so that clock tick is useless for 
security. But if they choose a different size resistor, the 
eavesdropper cannot tell whether it is Alice choosing 10 ohms and Bob 
1,000 ohms, or the reverse. Of course, Alice and Bob know, because they 
know which resistor they're choosing. This happens 50 percent of the
time.

Alice and Bob keep only the data from the clock ticks where they choose 
a different size resistor. From each such clock tick, they can derive 
one secret key bit, according to who chooses the 10-ohm resistor and 
who the 1,000-ohm. That's because they know who's choosing which and 
the eavesdropper doesn't. Do it enough times and you've got key 
material for a one-time pad (or anything else) to encrypt the 
communications link.

I've simplified it a bit, but that's the gist of it.
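
If you want to play with it, here's a toy simulation of my simplified 
description (a sketch, obviously -- not Kish's actual analog hardware; 
the resistor values are just the ones from the example):

    import random

    LOW, HIGH = 10, 1000    # ohms; the two resistor values above

    def kish_key_exchange(ticks):
        # Toy model: at each clock tick both parties pick a resistor;
        # a passive eavesdropper sees only the sum of the two values.
        alice_key, bob_key = [], []
        for _ in range(ticks):
            a = random.choice((LOW, HIGH))    # Alice's choice
            b = random.choice((LOW, HIGH))    # Bob's choice
            if a == b:
                continue    # 20 or 2,000 ohms: choices exposed; discard
            # 1,010 ohms: the eavesdropper can't tell who chose what,
            # but each party infers the other's choice from their own.
            alice_key.append(1 if a == LOW else 0)
            bob_key.append(1 if b == HIGH else 0)
        return alice_key, bob_key

    alice_bits, bob_bits = kish_key_exchange(1000)
    assert alice_bits == bob_bits    # about 500 shared secret bits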

Interestingly enough, this key-generation mechanism is actually very 
similar to one described by Bennett and Brassard in the early 1980s 
using quantum properties (see Applied Cryptography, second edition, 
pages 554 to 557), but this one is all classical. That's what makes it 
neat.

It's also reminiscent of a 1940s scheme from Bell Labs. Details of that 
system are either classified or lost, but James Ellis described it in 
1987 as inspiring his invention of public-key cryptography back in the 
early 1970s:  "The event which changed this view was the discovery of a 
wartime, Bell-Telephone report by an unknown author describing an 
ingenious idea for secure telephone speech (reference 2). It proposed 
that the recipient should mask the sender's speech by adding noise to 
the line. He could subtract the noise afterwards since he had added it 
and therefore knew what it was."

That "reference 2" is something published by Bell Labs called Final 
Report on Project C43. No one I know has seen a copy. Bell Labs 
cryptographers have searched the archives for it, and they came up 
empty-handed.

Did Kish rediscover a secure communications system from the 1940s? Or 
is this a retro-discovery: an idea that by all rights should have 
emerged in the 1940s, but somehow evaded human epiphany until now?

And most importantly, is it secure?

Short answer: There hasn't been enough analysis. I certainly don't know 
enough electrical engineering to know whether there is any clever way 
to eavesdrop on Kish's scheme. And I'm sure Kish doesn't know enough 
security to know that, either. The physics and stochastic mathematics 
look good, but all sorts of security problems crop up when you try to 
actually build and operate something like this.

It's definitely an idea worth exploring, and it'll take people with 
expertise in both security and electrical engineering to fully vet the 
system.

There are practical problems with the system, though. The bandwidth the 
system can handle appears very limited. The paper gives the 
bandwidth-distance product as 2 x 10^6 meter-Hz. This means that over a 
1-kilometer link, you can only send at 2,000 bps. A dialup modem from 
1985 is faster. Even with a fat 500-pair cable you're still limited to 
1 million bps over 1 kilometer.
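
The arithmetic, if you want to plug in your own link length (a sketch 
that crudely assumes one bit per hertz of bandwidth per wire pair):

    BD_PRODUCT = 2e6    # bandwidth-distance product from the paper, meter-Hz

    def max_rate_bps(distance_m, pairs=1):
        # Crude estimate: one bit per hertz of available bandwidth,
        # multiplied by the number of independent wire pairs.
        return BD_PRODUCT / distance_m * pairs

    print(max_rate_bps(1000))                # 2,000 bps over 1 km
    print(max_rate_bps(1000, pairs=500))     # 1,000,000 bps, 500-pair cable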

And multi-wire cables have their own problems; there are all sorts of 
cable-capacitance and cross-talk issues with that sort of link. Phone 
companies really hate those high-density cables, because of how long it 
takes to terminate or splice them.

Even more basic: It's vulnerable to man-in-the-middle attacks. Someone 
who can intercept and modify messages in transit can break the 
security. This means you need an authenticated channel to make it work 
-- a link that guarantees you're talking to the person you think you're 
talking to. How often in the real world do we have a wire that is 
authenticated but not confidential? Not very often.

Generally, if you can eavesdrop you can also mount active attacks. But 
this scheme only defends against passive eavesdropping.

For those keeping score, that's four practical problems: It's only link 
encryption and not end-to-end, it's bandwidth-limited (but may be 
enough for key exchange), it works best for short ranges, and it 
requires authentication to make it work. I can envision some 
specialized circumstances where this might be useful, but they're few 
and far between.

But quantum key distributions have the same problems. Basically, if 
Kish's scheme is secure, it's superior to quantum communications in 
every respect: price, maintenance, speed, vibration, thermal resistance 
and so on.

Both this and the quantum solution share another problem, however; 
they're solutions looking for a problem. In the realm of security, 
encryption is the one thing we already do pretty well. Focusing on 
encryption is like sticking a tall stake in the ground and hoping the 
enemy runs right into it, instead of building a wide wall.

Arguing about whether this kind of thing is more secure than AES -- the 
United States' national encryption standard -- is like arguing about 
whether the stake should be a mile tall or a mile and a half tall. 
However tall it is, the enemy is going to go around the stake.

Software security, network security, operating system security, user 
interface -- these are the hard security problems. Replacing AES with 
this kind of thing won't make anything more secure, because all the 
other parts of the security system are so much worse.

This is not to belittle the research. I think information-theoretic 
security is important, regardless of practicality. And I'm thrilled 
that an easy-to-build classical system can work as well as a sexy, 
media-hyped quantum cryptosystem. But don't throw away your crypto 
software yet.

<http://tees.tamu.edu/portal/page?_pageid=37,3347&_dad=portal&_schema=PORTAL&p_news_id=1268>

Paper:
<http://arxiv.org/ftp/physics/papers/0509/0509136.pdf>

SlashDot discussion:
<http://it.slashdot.org/article.pl?sid=05/12/10/1714256>

This essay originally appeared on Wired.com:
<http://www.wired.com/news/privacy/0,1848,69841,00.html>


** *** ***** ******* *********** *************

      Comments from Readers



From: "WJK" <[log in to unmask]>
Subject: RE: CRYPTO-GRAM, November 15, 2005

While I agree with you in concept, the idea that all software should be 
secure is almost impossible to achieve for the small- or medium-sized 
manufacturer.  In my situation, I use many purchased controls (such as 
a grid) within my program.  I use a compiler purchased from 
Microsoft.  If I write perfectly secure code, I still have the 
possibility of intrusion through errors created by either of those 
multiple entities.  If that happens, then where does the user go when I 
point the finger at my supplier?  Same frustration you encounter when 
it is either a hardware failure or software failure. Throw up your 
hands, buy a new one.

Further, my volume of software in use is not large enough to support 
hundreds of thousands of dollars of testing by cyber criminal 
types.  What you are saying is that there is no room in the marketplace 
for the small business or programmer consultant.  As Microsoft moves 
into more of a consulting role, they will face the same problems and 
issues; the single client is not able to afford development costs that 
ensure absolute security, nor should they.

I believe that the operating system has to step up to the plate and 
protect the applications from alteration by criminal 
elements.  Developers need to be able to lock down their code as it 
leaves their business.  The operating system developers have both the 
volume and financial capability to play the role of software cop.  By 
ensuring multiple choices, we further reduce the chance of total 
collapse caused by a single piece of clever miscreant coding.  Multiple 
software choices actually provide security in and of themselves.



From: Ben Giddings <[log in to unmask]>
Subject: Re: CRYPTO-GRAM, November 15, 2005: RFID Passports

I'm a software engineer working at an RFID reader company that designs 
UHF frequency RFID devices.  I'm not an RF engineer, nor do I have much 
experience with HF tags (like ISO 14443 devices) but it sounds like 
you're really spreading some misinformation.

Passive RFID tags are powered by the reader, and HF tags are powered by 
induction; this severely limits their range.  I don't know what was 
seen at 69 feet, but I sincerely doubt that it was a reader powering a 
tag at that distance.

The ISO 14443 standard uses a 13.56 MHz signal, with a wavelength of 
about 22m.  ISO tags are powered by inductive coupling in the reactive 
near field, where power drops off with 1/d^3.  This means that since 
the standard read range is 0.1m (10cm), to increase that distance to 
1m, you would need to supply 1000x more power.  Since the power 
supplied to an antenna is normally limited by FCC rules to 1 Watt, this 
would mean you'd require 1 kW to power them at 1 m, or 1 MW at 10 m.  You 
may be able to eavesdrop on the signal at a long distance, but unless I 
completely misunderstand this stuff, you won't be powering it at that 
distance.  Technology gets better, but physics just doesn't change.
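
A quick numeric check of that scaling argument (a sketch assuming the 
1-watt limit and ideal 1/d^3 near-field coupling described above):

    def required_power_watts(target_range_m, base_range_m=0.1,
                             base_power_w=1.0):
        # Reactive near-field coupling falls off roughly as 1/d^3, so
        # the power needed to energize a tag grows as (d / d0)^3.
        return base_power_w * (target_range_m / base_range_m) ** 3

    print(required_power_watts(1.0))     # ~1,000 W (1 kW) at 1 m
    print(required_power_watts(10.0))    # ~1,000,000 W (1 MW) at 10 m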

As for the "secrecy" of the protocol, I simply went to the ISO site and 
searched for 14443, voila, documents.  Sure, you have to pay for them, 
which I find to be a real pain for standards, but right there in the 
search results, "Part 3: Initialization and anticollision."

I think you're right that RFID tags in passports are a bad idea. You're 
also right that they need to do more to make it difficult for people to 
read the tags, and decrypt the data they contain.  On the other hand, 
we shouldn't be completely paranoid.  People routinely give out their 
passport numbers in insecure online forms to book hotels, etc.  Really, 
on its own, the number isn't too helpful.  It's not *good* to just give 
it out, but it isn't the end of the world either.  If it's easier to 
pick someone's pocket to get their passport than it is to read it from 
a distance, then the security of RFID-based tags is probably good enough.



From: Carlo Graziani <[log in to unmask]>
Subject: Re: CRYPTO-GRAM, November 15, 2005

I read with interest your economic analysis of the perverse incentive 
system that gives home PC users such appalling security.  I agree, by 
and large, although I must say that I believe the analysis does not 
actually get to its final destination.

It strikes me as crazy to pretend that users and ISPs have no 
responsibility whatsoever for the bad behavior of home computers. 
Windows security is undoubtedly awful, but even an OpenBSD box can be 
compromised if its administrator's security policy is poor.

At the moment, if some small business gets their website DDOS-ed by 
some hacker's botnet, they have no recourse whatsoever. They bear the 
entire cost of a situation they did nothing to create, even if their 
site is secure.

If they were allowed to hold liable the ISPs hosting computers that 
participated in the attack, if those ISPs did nothing to detect and 
thwart it, then those ISPs would start serious malware 
activity-detection programs, and would automatically disconnect from 
the net any computer that suddenly started sending thousands of e-mail 
messages per hour, or started indiscriminately portscanning entire 
Class-B networks, or triggered any one of a dozen other "misbehavior" 
criteria.

Then, when your mom (or mine, for that matter) complained to her ISP 
that her "Internet doesn't work any more" and was told of the reason, 
and informed that there's a clean reinstall of the OS in her future, 
and a bond to be posted that will be forfeit on the next offense, she'd 
get mad at whoever sold her her software. Possibly legally mad. 
Multiply that by millions of moms (OK, dads too), and suddenly you have 
a serious and urgent reason for software vendors to get serious about 
security.

You might also wind up creating an industry of low-cost, bonded home PC 
security consultants, who could be hired to install firewalls, scan for 
active ports, check for rootkits, create customized "known-good" disk 
images for quick restores of compromised systems, etc.  Home malware 
insurance might also suddenly spring up.  These might arguably be good 
outcomes.

The point is, you can't secure the Internet against incompetent 
operators by shifting all liability to manufacturers, any more than you 
can secure the highway system against incompetent drivers by shifting 
all liability to automobile manufacturers.

All you can -- and should -- demand of the industry is diligence. But 
even if the industry hired Bruce Schneier and Theo de Raadt to form a 
committee to vet and sign off on every version of every OS, users would 
still get rooted and exploited because of their own ineptitude.

Networked computers are not toasters. If ineptly managed, they damage 
the entire commons, not just the operator. I don't really know what the 
best way is to ensure that good system management practices are 
widespread, but I'm pretty sure that protecting all users from the 
costs incurred due to their bad computing practices perpetuates this 
new variant of the Tragedy of the Commons.



From: Andrewwhitby <[log in to unmask]>
Subject: Re: CRYPTO-GRAM, November 15, 2005

Poor software quality is not the only externality at work and possibly 
not the most important. Unfortunately, even as a user the costs of my 
security decisions are borne by others. If I fail to apply a patch and, 
as a result, my computer is infected by a worm, I may suffer a personal 
cost. But so will the company whose web server is taken down in a DDOS 
attack using my machine, or the individuals who receive spam sent via 
my machine. Because the social cost of poor security exceeds the 
private cost, a rational person will choose a level of security that is 
socially inefficient (too low).

So even if we had a perfectly competitive software market, users may 
still choose software that, from a social perspective, is inefficiently 
insecure.

This seems particularly likely for home and small business users, where 
the cost of good security may be high compared to the expected loss 
resulting from poor security. Because the private cost of insecurity 
for large companies is high, relative to the cost of being secure, they 
are likely to demand more secure software. For home and small business 
users, the private cost of insecurity is lower, so they are less likely 
to demand secure software. In the best case, we might expect a 
trickle-down effect from corporate software (witness the better security 
features of Windows XP, a corporate platform adapted for home use, 
compared to 95/98). But even then, because the security features are 
developed with large customers in mind, they are hard to configure 
correctly for the average home user.

Incidentally, the same logic provides a rationale for government 
funding of vaccination programs. There's no benefit to me in preventing 
you from dying of polio, but there is in preventing you from catching 
it and spreading it to me. However, just as personal liability for 
spreading infection unintentionally hasn't caught on, it seems unlikely 
to in this case.



From: Phil Karn <[log in to unmask]>
Subject: comments on passports and software liability

You've written a lot about liability for security holes, but I have yet 
to see you address such liability for open source authors.

It's bad enough that open source volunteers continually risk being sued 
for infringing patents they might not know anything about. Now you also 
want to hold them liable for unintentional security holes? As the old 
saying goes, no good deed goes unpunished. Writing useful code and 
giving it away is a social good that ought to be encouraged, not
punished.

Would the individual author of a piece of open source code be 
personally liable for an unintentional security vulnerability? Probably 
not, given what you said about how the liability should devolve on 
corporations, not individual programmers. But what about companies like 
Red Hat who bundle and market open source projects?

What about nonprofits like the Free Software Foundation, or Software in 
the Public Interest, which runs the Debian Linux project? Would it really 
be fair to hold them all legally liable for previously undetected 
security bugs in the software they distribute? How long would they stay 
around if they were?

I really think the better approach is *disclosure*, not liability. We 
may have no choice but to make companies like Microsoft responsible for 
the security holes in their products, because only they are in a 
position to fix them. And while Microsoft doesn't have a monopoly on 
*finding* holes, having the source code certainly gives them an 
advantage. They need an incentive to actually do it.

But open source is fundamentally different. There are alternatives to 
liability. Everyone is on the same playing field. Anyone can look at 
the source, find *and fix* security holes.

Way back when I used to assemble Heathkits, I remember some excellent 
advice they used to give over and over again for when you have trouble: 
have someone else check your work. Someone who isn't as close to it as 
you are will often quickly spot a mistake that you've repeatedly 
missed. This is just as true for software as for hardware. For this 
reason, I think that publishing your source code, with permission to 
others to find and fix it, should get you off the hook with regard to 
any unintentional security holes. You've done all you can to help 
others find them for you, something we all know the authors cannot 
always do for themselves.

On another topic, passports, I don't think that merely randomizing the 
serial numbers for the CSMA algorithm is enough, as that would still 
let you detect the presence of an anonymous passport at a distance. I 
loved your scenario of a terrorist bomb automatically detonating when 
it detects four or more American passports. But most locals don't carry 
passports at all. Only foreigners do. So designing the bomb to detonate 
when it detects some number of *any* kind of passport may be almost as 
effective, especially if it's known that the tourists in a given area 
are primarily American or British, say.


** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, 
insights, and commentaries on security: computer and otherwise.  You 
can subscribe, unsubscribe, or change your address on the Web at 
<http://www.schneier.com/crypto-gram.html>.  Back issues are also 
available at that URL.

Comments on CRYPTO-GRAM should be sent to 
[log in to unmask]  Permission to print comments is assumed 
unless otherwise stated.  Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who 
will find it valuable.  Permission is granted to reprint CRYPTO-GRAM, 
as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is the author of 
the best sellers "Beyond Fear," "Secrets and Lies," and "Applied 
Cryptography,"  and an inventor of the Blowfish and Twofish 
algorithms.  He is founder and CTO of Counterpane Internet Security 
Inc., and is a member of the Advisory Board of the Electronic Privacy 
Information Center (EPIC).  He is a frequent writer and lecturer on 
security topics.  See <http://www.schneier.com>.

Counterpane is the world's leading protector of networked information - 
the inventor of outsourced security monitoring and the foremost 
authority on effective mitigation of emerging IT threats. Counterpane 
protects networks for Fortune 1000 companies and governments 
world-wide.  See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter.  Opinions expressed are not 
necessarily those of Counterpane Internet Security, Inc.

Copyright (c) 2005 by Bruce Schneier.
