CYBER-SOCIETY-LIVE Archives
CYBER-SOCIETY-LIVE@JISCMAIL.AC.UK


Subject: [CSL] From Gutenberg to the Global Information Infrastructure
From: John Armitage <[log in to unmask]>
Reply-To: [log in to unmask]
Date: Mon, 22 May 2000 11:21:11 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (1353 lines)

Forward From: Phil Agre [mailto:[log in to unmask]] 
Sent: Saturday, May 20, 2000 8:06 PM
To: Red Rock Eater News Service
Subject: From Gutenberg to the Global Information Infrastructure


[Heavily reformatted; apologies for any glitches.]

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
This message was forwarded through the Red Rock Eater News Service (RRE).
You are welcome to send the message along to others but please do not use
the "redirect" option.  For information about RRE, including instructions
for (un)subscribing, see http://dlis.gseis.ucla.edu/people/pagre/rre.html
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Date: Sat, 20 May 2000 11:22:20 -0700
From: Christine Borgman <[log in to unmask]>


  From Gutenberg to the Global Information Infrastructure:
  Access to Information in the Networked World

  Christine L. Borgman

  MIT Press, March 2000

  http://mitpress.mit.edu/book-home.tcl?isbn=026202473X


  Table of Contents

  1  The Premise and the Promise of a Global Information Infrastructure
  2  Is It Digital or Is It a Library? Digital Libraries and
       Information Infrastructure
  3  Access to Information
  4  Books, Bytes, and Behavior
  5  Why Are Digital Libraries Hard to Use?
  6  Making Digital Libraries Easier to Use
  7  Whither, or Wither, Libraries?
  8  Acting Locally, Thinking Globally
  9  Toward a Global Digital Library: Progress and Prospects


Chapter 1

The Premise and the Promise of a Global Information Infrastructure

  Let us build a global community in which the people of neighboring
  countries view each other not as potential enemies, but as potential
  partners, as members of the same family in the vast, increasingly
  interconnected human family.  -- Vice-President Al Gore (1994a)

  The information society has the potential to improve the quality of
  life of Europe's citizens, the efficiency of our social and economic
  organization and to reinforce cohesion.  -- Bangemann Report (1994)

The premise of a global information infrastructure is that
governments, businesses, communities, and individuals can cooperate
to link the world's telecommunication and computer networks together
into a vast constellation capable of carrying digital and analog
signals in support of every conceivable information and communication
application.  The promise is that this constellation of networks will
promote an information society that benefits all: peace, friendship,
and cooperation through improved interpersonal communications;
empowerment through access to information for education, business,
and social good; more productive labor through technology-enriched
work environments; and stronger economies through open competition in
global markets.

The promise is exciting and the premise appears rational.  Information
technologies are advancing at a rapid pace and becoming ever more
ubiquitous.  Many scholars, policy makers, technologists, business
people, and pundits contend that changes wrought by these new
technologies are revolutionary and will result in profound
transformations of society.  Physical location will cease to matter.
More and more human activities in working, learning, conducting
commerce, and communicating will take place via information
technologies.  Online access to information resources will provide
a depth and breadth of resources never before possible.  Most print
publication will cease; electronic publication and distribution
will become the norm.  Libraries, archives, museums, publishers,
bookstores, schools, universities, and other institutions that rely on
artifacts in physical form will be transformed radically or will cease
to exist.  Fundamental changes are predicted in the relationships
between these institutions, with authors less dependent on publishers,
information seekers less dependent on libraries, and universities less
dependent on traditional models of publication to evaluate scholarship.
Networks will grease the wheels of commerce, improve education,
increase the amount of interpersonal communication, provide
unprecedented access to information resources and to human expertise,
and lead to greater economic equity.

In contrast, others argue that we are in the process of evolutionary,
not revolutionary, social change toward an information-oriented
society.  People make social choices which lead to the development of
desired technologies.  Computer networks are continuations of earlier
communication technologies such as the telegraph and telephone,
radio and television, and similar devices that rely on networked
infrastructures.  All are dependent on institutions, and these evolve
much more slowly than do technologies.  Digital and digitized media
are extensions of earlier media, and the institutions that manage them
will adapt them to their practices as they have adapted many media
before them.  Electronic publishing will become ever more important,
but only for certain materials that serve certain purposes.  Print
publishing will co-exist with other forms of distribution.  Although
relationships between institutions will evolve, publishers, libraries,
and universities serve gatekeeping functions that will continue
to be essential in the future.  More activities will be conducted
online, with the result that face-to-face relationships will become
ever more valued and precious.  Telecommuting, distance-independent
learning, and electronic commerce will supplement, but not supplant,
physical workplaces, classrooms, and shopping malls.  Communication
technologies often increase, rather than decrease, inequities, and
we should be wary of the economic promises of a global information
infrastructure.

Which of these scenarios is more likely to occur?  Proponents of each
offer historical precedent and argue rationally for their cases.  Many
other scenarios exist, some between those presented above and some at
the far ends of the spectrum.  The extremes include science-fiction-
like scenarios in which technology controls all aspects of daily
life, resulting in a police state where every activity is monitored,
and survivalist scenarios in which some catastrophe destroys all
technology, with the result that new societies are reinvented
without it.  The science fiction and survivalist scenarios are easily
discounted because checks and balances are in place to prevent them.
Choosing between the revolutionary, discontinuity scenario and the
evolutionary, continuity scenario described above is more problematic.
Each has merit and each is the subject of scholarly inquiry and
informed public debate.

In view of the undisputed magnitude of some of these developments, it
is reasonable to speak of a new world emerging.  It is not reasonable,
however, to conclude that these changes are absolute, that they will
affect all people equally, or that no prior practices or institutions
will carry over to a new world.  Nor is it reasonable to assume that
any individual institutions, whether libraries, archives, museums,
universities, schools, governments, or businesses, will survive
unscathed and unchanged into the next millennium.  Strong claims in
either direction are dangerous and misleading, as well as lacking in
intellectual rigor.  The arguments for these scenarios, the underlying
assumptions, and the evidence offered must be examined.  Upon
close examination, it will often be found that strong claims about
the effects of information technologies on society, and vice versa,
are based on simplistic assumptions about technology, behavior,
organizations, and economics.  None of these factors exists in a
vacuum; they interact in complex and often unpredictable ways.

I argue throughout this book that the most likely future scenario
lies somewhere between the discontinuity and continuity scenarios.
Information technology makes possible all sorts of new activities
and new ways of doing old activities.  But people do not discard all
their old habits and practices with the advent of each new technology.
Nor are new technologies created without some expectations of how they
will be employed.  The probable scenario is neither revolution nor
evolution, but co-evolution of information technology, human behavior,
and organizations.  People select and implement technologies that are
available and that suit their practices and goals.  As they use them,
they adapt them to suit their needs, often in ways not anticipated
by their designers.  Designers develop new technologies on the basis
of technological advances, marketing data, available standards, human
factors studies, and educated guesses about what will sell.  Products
evolve in parallel with the uses for which they are employed.  To use
a simplistic aphorism: Technology pushes, while demand pulls.

The central concern of this book is access to information in a
networked world.  Information access is among the primary arguments
for constructing a global information infrastructure.  Information
resources are essential for all manner of human affairs, including
commerce, education, research, participatory democracy, government
policy, and leisure activities.  Access to information for all these
purposes is at the center of the discontinuity-continuity debates.
Some argue that computer networks, digital libraries, electronic
publishing, and similar developments will lead to radically different
models of information access.  The technologies of creation,
distribution, and preservation will undergo dramatic transformation,
as will information institutions such as libraries, archives, museums,
schools, and universities.  Relationships among these and other
stakeholders, including authors, readers, users, and publishers, will
evolve as well.  Others argue that stakeholders, relationships, and
practices are so firmly entrenched that structural changes will be
slow and incremental because most new technologies are variations on
those that came before.  My view is that some degree of truth exists
in each of these statements.  These and other arguments are examined
throughout the book.

Much has been written about technology, human behavior, and policy
regarding access to information.  Most of the writing, however,
focuses on one of these three aspects with little attention to the
other two.  In this book I endeavor to bring all three together,
drawing on themes, theories, results, and practices from multiple
disciplines and perspectives to illustrate the complex challenges that
we face in creating a global information infrastructure.  Technical
issues in digital libraries and information retrieval systems are
addressed, but not in the depth provided in recent books by Lesk
(1997a) and Korfhage (1997).  Nor are design issues addressed to
the degree covered by Winograd et al. (1996).  Information-related
behavior in electronic environments is covered, but in less depth
than in Marchionini 1995.  Institutional and organizational issues are
treated more fully in Bishop and Star 1996, Bowker et al. 1996, and
Sproull and Kiesler 1991.  Policy issues of the Internet are addressed
in more depth in Branscomb and Kahin 1995, Kahin and Abbate 1995, and
Kahin and Keller 1995.  In this book I draw on these and many other
resources to weave a rich discussion of access to information in a
networked world.  In view of the early stages of these developments,
more questions are raised than yet can be answered.  My hope is to
provoke informed discussion between the many interested parties around
the world.

Converging Tasks and Technologies

People use computer networks for a vast array of activities, such
as communicating with other individuals and groups, performing tasks
requiring remote resources, exchanging resources, and entertainment
(whether with interactive games or passive media such as videos).
Among the few common threads in predictions of future technology (see,
e.g., Next 50 Years 1997 and Pontin 1998) is that we will see more
convergence of information and communication technologies, blurring
the lines between tasks and activities and between work and play.
We will have "ubiquitous computing" (Pontin 1998) and "pervasive
information systems" (Birnbaum 1997).  We will become "intimate with
our technology" (Hillis 1997), and "information overload" (Berghel
1997a) will be more of a problem than ever.

An underlying theme of such predictions is "digital convergence",
indicating that more and more information products will be created in
digital form or will be digitized, allowing applications to be blended
more easily.  Digital technologies will co-exist with analog and
other forms of information technologies yet to be invented.  Analog
technology is based on continuous flows, rather than the discrete
bits of digital technology.  Computer and communication networks
are an example of the bridge between these technologies.  The word
"modem" was coined from "modulate" and "demodulate", which describe
the device's function in converting digital data produced by computers
into analog signals that could be sent over telephone lines designed
for voice communication and vice versa.  Predictions of ubiquitous
computing are based on an increasing reliance on small communication
devices and embedded systems such as those that control heating and
lighting in homes and offices.  Future computer networks are expected
to link these devices just as they now link personal computers, data
storage, printers, and other peripherals (Pontin 1998).
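
To make the modulation idea concrete, here is a minimal sketch in
Python, assuming nothing beyond the standard library; the sample rate,
baud rate, and tone frequencies are illustrative values in the range
used by early voice-band modems, not a specification of any particular
device.

  # Illustrative sketch only: a toy frequency-shift-keying modulator,
  # showing in miniature how a modem maps discrete bits onto a
  # continuous tone that a voice telephone line can carry.
  import math

  SAMPLE_RATE = 8000      # samples per second (a typical voice-band rate)
  BAUD = 300              # bits per second
  FREQ_ZERO = 1070.0      # tone (Hz) standing for a 0 bit
  FREQ_ONE = 1270.0       # tone (Hz) standing for a 1 bit

  def modulate(bits):
      """Turn a sequence of bits into a list of analog waveform samples."""
      samples = []
      samples_per_bit = SAMPLE_RATE // BAUD
      for bit in bits:
          freq = FREQ_ONE if bit else FREQ_ZERO
          for _ in range(samples_per_bit):
              t = len(samples) / SAMPLE_RATE
              samples.append(math.sin(2 * math.pi * freq * t))
      return samples

  if __name__ == "__main__":
      wave = modulate([1, 0, 1, 1, 0])
      print(f"{len(wave)} analog samples generated for 5 bits")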

Modes of Communication

No matter what technologies gird the framework of the global
information infrastructure, human activities involving the network
will be intertwined.  As the editors of Wired magazine (1997, p. 14)
put it,

  ... broader and deeper new interfaces for electronic media are
  being born.  ...  What they share are ways to move seamlessly
  between media you steer (interactive) and media that steer you
  (passive).  ...  These new interfaces work with existing media,
  such as TV, yet they also work on hyper-linked text.  But most
  important, they work on the emerging universe of networked media
  that are spreading across the telecosm.

Despite the hyperbole, this quotation highlights a useful distinction
between "pull" technology (which requires explicit action by the user)
and "push" technology (which comes to the user without the user's
explicit action).  Some activities are easily categorized by this
dichotomy, but others have characteristics of each.  Composing and
sending an email message and searching a database require explicit
"pull" actions, for example.  Although both the broadcast mass media
and the emerging media services that deliver tailored selections of
content to workstations during idle time can be classified as push
technologies (editors of Wired 1997), the latter form also could
be considered "pull", because the user presumably took action to
subscribe to the service.  Similarly, if composing and sending email
is pull technology, then receiving mail can be viewed as a form
of "push".  Opening and reading messages requires explicit actions,
but users can decide what to read, delete, or ignore.  They also can
sort desirable and undesirable messages by means of automatic filters.
Because subscribing to desirable content and filtering out undesirable
content require parallel actions, both can be viewed as forms of push
technology if one accepts the Wired definitions of "push" and "pull". 
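
The distinction can be made concrete with a minimal sketch.  In the
Python fragment below, whose class and method names are invented for
illustration rather than taken from any source discussed here, fetch()
is pull because the user explicitly asks for content, publish() is push
because content arrives unbidden, and subscribe() is the one pull-like
act that sets the pushing in motion.

  # Illustrative sketch: "pull" versus "push" delivery in miniature.
  class NewsSource:
      def __init__(self):
          self.items = []
          self.subscribers = []

      # Pull: the user explicitly asks for content.
      def fetch(self, topic):
          return [item for item in self.items if topic in item]

      # Push: content is delivered without a per-item request.
      def publish(self, item):
          self.items.append(item)
          for deliver in self.subscribers:
              deliver(item)        # arrives whether or not it was asked for

      def subscribe(self, callback):
          # The one explicit, pull-like act: opting in to future pushes.
          self.subscribers.append(callback)

  if __name__ == "__main__":
      source = NewsSource()
      inbox = []
      source.subscribe(inbox.append)           # subscription blurs the line
      source.publish("markets: shares rise")   # pushed to the inbox
      print(source.fetch("markets"))           # pulled on demand
      print(inbox)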

Push and pull combine in other ways as well.  People subscribe to
distribution lists, which then send messages at regular or irregular
intervals.  They also subscribe to services that alert them when new
resources are posted on a specific network site, but they must take
explicit action to view or retrieve the resources from that site.

Truly interactive forms of communication are difficult to categorize
as push or pull.  People engage in conversations in "chat rooms",
play roles in MUDS and MOOS, and hold conferences, meetings, and
classes online in real time.  All require explicit actions, but
the characteristics of these two-way or multi-way conversations are
far richer than the solo-action pull of searching a database or
sending a message.  Some of these are the "demassified" communication
technologies that Rogers (1986) predicted, more tailored to individual
users and to small audiences.  However, the "push" technologies of
customized desktop news delivery touted by Wired in 1997, in which
messages continually scroll across the subscriber's screen, have yet
to become the commercial success that was predicted.  Perhaps they
were not sufficiently customized or "demassified".  Perhaps people
found them too disruptive, preferring "pull" modes in which they could
acquire the desired content at their convenience.

The intertwining of communication modes in electronic environments
adds new dimensions to information access.  Although more study
has been devoted to "active" than to "passive" information seeking,
even these categories are problematic in this new environment.
These are but a few of many communication definitions and concepts
being reconsidered in the light of new information technologies.

Task Independence and Task Dependence

The more intertwined tasks and activities become, the more difficult
it becomes to isolate any one task for study.  In the past, most
theory and research presumed that the human activities involved in
access to information could be isolated sufficiently to be studied
independently.  This is particularly true of information-seeking
behavior, a process often viewed as beginning when a person recognizes
the need for information and ending when the person acquires some
information resources that address the need.  Such a narrow view of
the process of seeking information simplifies the conduct of research.
For example, information seekers' activities can be studied from
the time they log onto an information retrieval system until they
log off with results in hand.  The process can be continued further
by following subsequent activities to determine which resources
discovered online were used, how, and for what purposes.  Another
approach is to constrain the scope of study to library-based
information seeking.  People can be interviewed when they first
enter a library building to identify their needs as they understood
them at that time.  Researchers can follow users around the building
(with permission, of course), and can interview the users again before
departure to determine what they learned or accomplished.

Narrowly bounded studies such as these provide insights into detailed
activities and are useful for evaluating specific systems, services,
and buildings.  However, their value and validity are declining for
the purposes of studying the information environment of today and
assessing the needs of the future.  In the early days of information
retrieval, people might reasonably conduct most or all of their
searching on one retrieval system.  Only a few systems existed, and
each had a limited number of databases.  These were complex systems
requiring lengthy training.  Information seekers, often with the
assistance of skilled searchers, would devote considerable effort to
constructing, executing, and iterating a search on a single system
(Borgman, Moghdam, and Corbett 1984).  A close analysis of user-system
interaction could provide a rich record of negotiating a single
search query.  Even so, such studies provide little insight into
the circumstances from which the information need arose or into
the relationship between a particular system and the use of other
information resources.

In today's environment, most people have access to a vast array of
online resources via the Internet and online resources provided by
libraries, archives, universities, businesses, and other organizations
with which they are affiliated, as well as print and other hard-copy
resources.  They are much less dependent on any single system or
database.  Rather, they are grazing through a vast array of resources,
perhaps "berry picking" (Bates 1989) from multiple sources and
systems.  Studying any individual system is far less likely to provide
a comprehensive view of information-seeking activities than it was
in the past.  Similarly, people have fewer reasons to spend time in
library buildings, now that they can use many library resources from
the convenience of home, office, dorm, coffee shop, or anywhere else
with network access.  And they can do so at hours of day or night
when library buildings normally are closed.  Thus, time spent in the
library building may be for narrower and more specific purposes, and
may occur only at critical stages in the search process.  The use of
library buildings also reflects patterns that are influenced by age,
generation, culture, discipline of study, and many other factors.
Such research should yield insights into the design of future
buildings and services, provided it is set in a larger context of
overall information-use patterns.

Future research on access to information must consider the complex
relationships between information-related activities and the context
of work and leisure practices in which these activities are conducted.
Although all scholarship is constrained by the necessity of studying
that which can be studied, particular caution is necessary when
studying tasks that tend to be interdependent.

Technology Adoption and Adaptation

Underlying the design of any information technology are assumptions
about how and why people will use it.  The assumptions are sometimes
explicit and sometimes only implicit, whether for individual
communication devices, for information systems, or for the design
of a global information infrastructure.  In identifying design
criteria, and making implicit assumptions explicit, many methods and
perspectives can be applied.  We can evaluate which prior technologies
were adopted and which were not, the processes by which they were
adopted, how similar technologies are used, what features and
functions are most popular and most effective, and how their users
adapt them to new purposes.

I will highlight three perspectives on assessing how and why people
use information technologies.  Though many other perspectives and
methods exist, these three are applicable to our concerns for access
to information.

Adoption

Of the vast number of information technologies that are invented,
only a few make it to the marketplace, and of these, even fewer are
successful.  The quality of the product is only one determinant of
market success.  Many products that receive critical acclaim fail to
garner large market shares.  The Beta video recording technology and
the Macintosh computer are the best-known examples.  In contrast, many
products whose reviews range from skepticism to scorn achieve great
market success.  Business factors such as timing, marketing, and
pricing are determinants of success.  Other determinants are social
factors involving how and why people choose to adopt any particular
innovation.  Rogers (1983, 1986) summarizes the results of a large
number of adoption studies using a five-stage model.  The first stage
of adoption is knowledge, or becoming aware of the existence of a new
technology that might be useful.  This stage is influenced by factors
such as previous practices, felt needs or problems, tendencies
toward being innovative, and norms of the individual's social system.
The second stage is persuasion, which in turn is influenced by the
perceived characteristics of the innovation, how well it might work,
how easy it is to try, and how easily the outcome can be observed.
In the third stage, the adopter makes a tentative decision to accept
or to reject the technology.  Acceptance may lead to implementation
(fourth stage) and, if the innovation is deemed sufficiently
useful, to a confirmation to continue its use (fifth stage).  If the
innovation is rejected, the individual still may revisit the decision
and adopt it later.

Electronic mail (email) provides an instructive example of the
adoption process.  A person may first become aware of its existence
through news reports or through discussions with friends, family, or
co-workers.  Someone surrounded by email users will hear about it more
quickly and frequently than someone whose acquaintances are nonusers.
Even today, elderly Americans who have minimal contact with computer
users may have at most a vague idea of what email is, for example.  In
countries with minimal telecommunications and computing penetration,
only the elite may be aware of email as a potentially useful
technology.  In the persuasion stage, a person who has many potential
email correspondents will find the technology more attractive than
a person who knows no one else with an email address.  Similarly,
a person who already owns a computer with a modem will find it far
easier to try email than one who must acquire the technology and the
skills to use it.  Once they have tried it, some people will find
email sufficiently useful, affordable, and worth the time and effort
to continue using it.  Others will not.  Thus, once people become
aware of email, only some will consider trying it, a smaller number
will make the effort to try it; of these, only some will acquire it
and continue using it, and they may abandon it later.  Conversely,
some who rejected email at any of these adoption stages may consider
it again at some later time.

This adoption pattern also operates in the aggregate.  The "early
adopters" typically are risk takers who are willing to try unproven
techniques, often at great expense.  If they adopt the new technology,
their successes may convince more risk-averse individuals to try
it.  Conversely, if the early adopters reject it, others may be
more reluctant to try it.  By the time the low-risk late adopters
decide to implement a technology, the early adopters may have moved
on to something yet newer and more innovative.  Some technologies
reach a critical mass of adoption in a short period of time and
are great market successes.  Others are unable to find a match
with early adopters fast enough, and the entrepreneurs fail
before finding their niche in the market.  Others fail because they
do not fill a perceived need.  Yet others succeed because they are
good enough, cheap enough, and at the right place at the right time,
although not necessarily an optimal design.  Though this explanation
is a gross simplification of the adoption process, it illustrates a
few of the many social variables that influence the success of new
information technologies.

Again, email provides a useful case example.  Email filled a perceived
need early in the development of computer networks and reached a
critical mass of computer users fairly quickly.  Spreadsheets were
a similarly attractive technology that contributed to the adoption
of personal computers.  Early adopters of both technologies were
sophisticated computer users who tolerated complex user interfaces,
often unreliable software, and minimal functionality because the
technology was sufficiently valuable for their purposes.  People who
are early adopters of one technology tend to be early adopters of
others, willing to tolerate immature technologies in return for their
benefits, and often enjoy the challenge of working at the "bleeding
edge" of technical frontiers.

Conversely, late adopters of one technology tend to be late adopters
of others.  These people are far less likely to appreciate technology
for its own sake, preferring mature, easy-to-use technologies with
a high perceived payoff relative to the effort required in learning
to use them.  They are happy to let others "work the bugs out" before
spending the time, effort, and money to adopt them.  This distinction
between the personality characteristics and social context of early
and late adopters is an important one to bear in mind when considering
technologies intended for a mass market.  If a global information
infrastructure is to achieve wide acceptance, it must be attractive to
late adopters.

Adaptation

Theories of diffusion and adoption are valuable in understanding
the social processes involved in choosing to employ a particular
technology.  The "diffusion of innovations" theory originated in rural
sociology to explain farmers' choices of agricultural innovations
such as farming equipment, hybrid plants, pesticides, and techniques
for planting, harvesting, and storing crops.  The theory was later
extended to study the adoption of a diverse array of innovations
including solar energy during a fossil-fuels shortage and family
planning methods in developing countries.  One weakness of applying
the "diffusion of innovations" theory to information technologies
is the implicit assumption that the innovation is relatively static.
Information technologies tend to be more dynamic and flexible than
farming equipment, for example.  Any communication device may be
short-lived, making it difficult to compare the actions of someone who
adopted the first crude implementation to those of someone who adopted
a more sophisticated and less expensive version only months later.
Moreover, information technologies are more malleable and adaptable to
individual purposes than are most other technologies.  Thus, we must
look not just at the adoption of information technologies as a binary
(adopt / not adopt) decision, but also at how technologies, once
adopted, are adapted over time.

Books provide an early example of how people adapt information
technologies to their purposes.  Manuscripts (meaning, literally,
hand-written) were the first form of written record.  Manuscripts on
sheepskin or parchment were easier to create and read than chiseled
stone tablets, but still could be read only by one person in one place
at a time.  Manuscripts could be loaned for manual copying, which
enabled duplication, however laborious.  Gutenberg's improvements
in movable type in the fifteenth century made multiple copies
economically feasible for the first time.  Early printed books
retained the shape and size of manuscripts, following the earlier
technology.  Although the distribution of multiple copies meant that
more people could own and read a work concurrently, books still were
too bulky for portable use, except by the very rich.  Greenberg (1998)
recounts the oft-told story of Abdul Kassem Ismael, who was said to
have had a library of 117,000 books in tenth-century Persia.  Not
only did he carry his library with him while he traveled, on the backs
of 400 camels, he trained the camels to walk in alphabetical order.
Later innovations led to publishing books in more portable sizes that
fit not only in the saddlebags of yesteryear, but in the backpacks and
briefcases of today.

We find similar adaptations in the use of computer networks.  The
ARPANET, precursor to the Internet, was created for remote access to
scarce computing resources.  Electronic mail was a feature intended to
serve as an ancillary communication function.  Email proved so useful
for general communication that it became the dominant use of the
network, much to the surprise of the ARPANET's designers (Licklider
and Vezza 1978; Quarterman 1990).  Email was the "killer application"
that attracted most people to the Internet (Anderson et al. 1995;
Quarterman 1990), and it remains the most important reason for becoming
an Internet user (Katz and Aspden 1997).

Email is a far different application today than it was in the early
days of the ARPANET, however.  Early email consisted of very short
plain text messages.  Less than a decade ago, messages could take
several days to arrive, with delays caused whenever a server in a
store-and-forward network went down.  Email was neither fast enough,
reliable enough, nor functional enough to replace most other forms of
communication.  The technology advanced, as did users' perceived needs
for more capabilities and better services.  Today's email supports
long messages of formatted text and is fast, reliable, convenient,
and inexpensive (Berghel 1997b).  Increasingly, email software
allows people to send and receive file attachments that preserve the
integrity of text, images, graphics, and sound.  For many purposes,
email is a suitable substitute for telephone, fax, post, or express
mail.

Email now combines the features of word processors, file transfer
(ftp), and multimedia file management.  It also provides a bridge to
the World Wide Web by embedding live links to web sites.  By including
a URL (uniform resource locator) address in an email message, a user
can click on an address to launch a browser application and link to
the web site.  And the reverse is true.  Once at the web site, a user
can click on "email" and send a message to the web site.
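
A short sketch illustrates the link-embedding idea.  The Python
fragment below is a deliberately crude approximation, not the logic of
any particular mail client: it scans a plain-text message body for URLs
so that a client could render them as live links.

  # Illustrative sketch: spotting URLs in a plain-text email body.
  # The pattern is a simple approximation, not a complete URL grammar.
  import re

  URL_PATTERN = re.compile(r'https?://[^\s<>"]+')

  def find_links(body):
      """Return the URLs embedded in a plain-text email body."""
      return URL_PATTERN.findall(body)

  if __name__ == "__main__":
      message = ("The publisher's page is at "
                 "http://mitpress.mit.edu/book-home.tcl?isbn=026202473X "
                 "if you want details.")
      print(find_links(message))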

Email has evolved from a simple application to one that combines
a rich array of services.  As users realized its value and its
constraints, they identified further improvements that could be made.
Yet today's complex email technology has too much functionality to be
feasible for some purposes.  Thus, we also find evidence of complex
applications being stripped down to the bare elements that suit newly
identified needs.  An example is the convergence of email with pocket
pagers, which themselves were initially a simple, single-function
technology.  Some of today's more elaborate pagers include a full,
albeit tiny, QWERTY keyboard and alphanumeric display, on which people
can send and receive terse messages.  Other pagers include function
keys for common responses to email-type messages: yes, no, time, date,
etc.  Such devices can convey cryptic but critical messages, such as
"When do you arrive?" (answer: "AA 75, 8:44pm LAX"), "Did we win the
case?", "Running late, resched Tu at 3?" (answer: "no.  Tu 2pm ok?"),
"pls get milk", or "get KT @ school". 

These are but a few examples of how people adapt information
technologies by using them.  People sometimes adopt only part of
a technology, as illustrated by the example of stripped-down email.
Other times they disable or circumvent features of a technology.
Email file attachments are a case in point.  They are extremely useful
for exchanging files quickly between team members, co-authors, authors
and editors, authors or publishers and readers, or teachers and
students.  But they are useful only when they work.  When exchange
partners have identical hardware and software platforms, fast
connections, and (better yet) the ability to scan for viruses before
receipt, file exchange may be seamless.

System designers, as well as those who send file attachments, often
are unaware of the difficulties involved in receiving attachments
intact and in a usable form, however.  Despite considerable progress,
the necessary platform independence and software independence required
for reliable exchange of attachments over networks has yet to be
achieved.  File exchanges between different platforms (e.g., PC and
Macintosh) and different operating systems (Windows 95, Windows 98,
Windows NT, Macintosh OS 7.5, Macintosh OS 8.0, Unix, etc.) introduce
compatibility problems.  Files created with widely used word
processing software such as Microsoft Word and Corel WordPerfect often
fail to transfer intact.  Text may transfer but formatting may be
corrupted, and the likelihood of accurate transfer decreases with the
inclusion of software-specific features such as tables, graphics, and
macros.  The more recent the version of the software used to create a
file, the less likely that earlier versions of the same software or of
competing software can open it intact.  Exchanging files of graphics
or sound is yet more problematic.  Adding another layer of concern
is the ability of attachments to carry computer viruses that can
contaminate the receiver's computer.

Unsolicited file attachments containing job applications,
advertisements, jokes, cartoons, greeting cards, and myriad other
materials clog network and modem lines and fill disk space.  Owing
to problems with technical compatibility, viruses, and bandwidth,
many people are making minimal use of file attachments, and some
are setting their email parameters to reject them entirely.  Local
network managers are introducing delays in email delivery to scan all
attachments for viruses, adding another layer of complexity.  Sending
faxes, or mailing paper and disks, can be faster, more reliable, and
less labor intensive.

The email examples offer several lessons in the adoption and
adaptation of information technologies.  One lesson is that early
adopters are willing to use an immature technology.  As they use it,
they will identify problems, recognize new possibilities, and demand
improvements.  Later adopters will identify yet more problems and
more desirable capabilities as they integrate it into their practices,
refining the technology further.  Another lesson is that one simple
technology may spawn so many features that it subdivides into
component parts, as email has done.  We also see that advanced
features that are extremely useful in some situations may result in
unintended and undesirable consequences in others, as is the present
case with file attachments.  When people have positive experiences
with a technology, they often are more inclined to adopt another
technology.  Conversely, when they have negative experiences, they
trust the technology less than before, and are less inclined to try
something new.  All these lessons argue for the importance of studying
the use of information technologies in actual working situations.
Though laboratory experiments are extremely valuable for improving
technologies under ideal conditions, field studies are essential to
determine how technologies are adopted and adapted.

Organizational Adaptation

Though some technology adoption and adaptation is attributable to
individual choices by individual users, much of it takes place in
the context of organizations.  Organizations such as businesses,
governments, universities, and schools make decisions about what
hardware, software, and services to purchase for use by their
constituencies.  Individuals may have little choice in which computing
platform, Internet provider, or services they use.  Organizations
usually set policies about how services such as email and information
resources are used.  Even in view of these constraints, individuals
often have considerable latitude in how they employ these technologies
in their work practices, however.

Sproull and Kiesler (1991) explain the unpredictable effects
of introducing technology into organizations from a "two-level
perspective".  They argue that most inventors and early adopters
of technology think primarily about efficiency of the technology.
System designers, as well as early adopters, focus on the instrumental
uses to which the technology is put, whether reducing "telephone tag"
through the use of electronic mail or lowering secretarial costs by
replacing typing with word processing.  These are the "first-level
effects" of a technology.

Users rarely implement a new technology in precisely the way that
designers intend, however.  Organizations find it difficult to
determine accurate estimates of direct costs, much less to determine
the first-level effects of technology on work practices, productivity,
or profits. Because technologies interact with routine work practices
and policies, implementation leads to "long-term changes in how people
work, treat one another, and structure their organizations" (Sproull
and Kiesler 1991, p. 1).  It is these "second-level effects" on the
social system of interdependent people, events, and behaviors that are
most pervasive and most important for organizations.  These effects
are also the most difficult to predict.

Again, email offers illustrations of first- and second-level effects
of introducing an information technology into organizations.  The
instrumental uses of email are many: it offers rapid interpersonal
communication within the organization and between the organization and
the external world, whether clients, suppliers, members, customers,
citizens, colleagues, friends, or family.  Email is convenient and
portable.  Because it is asynchronous, it can improve time management
by enabling people to send and receive messages at their convenience.
It serves as a broadcast technology, allowing an organization
to deliver the same message to a mass audience of its employees,
students, or other groups simultaneously.  Email has radically
increased the speed and volume of communication for most people who
use it.

We are finding many second-level effects of email that were not
anticipated at the time of its initial development or adoption.
Email is easily abused, whether by broadcasting messages that are of
interest only to a few or by sending rude and inappropriate messages
that are unlikely to be communicated by other means.  Junk email
can proliferate, resulting in inefficient use of staff time to sort
through it, rather than the efficiency of communication intended.
Once an organization adopts email, usually everyone who is provided
access is expected to use it regularly.  People are expected to
respond to messages, and to do so quickly.  As a result, memos and
other communications that did not require a response in paper form now
result in a flurry of acknowledgments and responses, adding another
layer of communication activity.

Communications that once were oral, or confined to one or a few
paper copies that were controlled by the individuals involved, are
now captured in permanent form on an organization's email servers.
As a result, organizations are faced with a difficult balance between
controlling their resources and the rights of individuals to their
privacy (Anderson et al. 1995; Berghel 1997b).  Organizations that
read employees' email may defend this practice on the grounds that
email is organizational documentation and that it resides on computers
owned by the organization.  Individuals, particularly those who have
lost jobs over the content of email messages, may contend that email
is the equivalent of telephone or other oral communications and is
subject to reasonable expectations of privacy.

Conversely, organizations are learning that email can have unexpected
and adverse legal consequences.  Conversations that once were oral and
now are recorded can be treated as legal evidence.  Among the evidence
that convicted Oliver North in the Iran-Contra affair were email
messages that he had deleted; they were recovered from backup storage
as part of the legal discovery process.  Similarly, email messages
internal to the Microsoft Corporation are being used by the US
government as evidence in an antitrust case against the corporation.
As a result of these and other cases, many organizations are expanding
the scope of their email policies to limit the content of email
messages and to minimize the archival storage of email transactions
(Harmon 1998).

These are only a few of many examples of the positive and negative
effects that email has had on organizational communication.
(For more, see Anderson et al. 1995; Berghel 1997b; Markus 1994.)
People's experiences with email and their perceptions of its role in
an organization combine to determine how they will adapt it to their
own practices.

As information technologies are more widely adopted, concern about
their second-level effects is increasing.  These concerns cross
many disciplines, levels of analysis, and research methods.  "Social
informatics" is an emerging research area that brings together the
concerns of information, computer, and social scientists with those
in the domains of study (Bishop and Star 1996; Borgman et al. 1996;
Bowker et al. 1996).  Social informatics scholars are attempting to
build upon research in the design and the use of information systems
and upon social studies of science and technology.  This book brings
a social informatics perspective to bear on access to information
in digital libraries and in a global information infrastructure,
considering first-level effects when these are all that can be known
and second-level effects where possible.

Creating a Global Information Infrastructure

The integration, interaction, and interdependence of information-
related tasks and activities lead us to think in terms of an
information infrastructure.  Rather than relying on separate devices
for producing text (e.g., typewriters and personal computers),
producing images (e.g., personal computers, photocopy machines,
drawing pads), communicating with individuals (e.g., telephones,
telefacsimile (fax) machines, mailboxes and stamps), and searching for
information resources (e.g., personal computers, local servers, print
technologies), all these tasks can be accomplished via a personal
computer connected to the Internet.  Conversely, these tasks can be
divided up in many new ways by means of specialized devices such as
cell phones, pagers, palmtops, and other "information appliances" that
can share information.  Computer and communication networks enable the
integration of tasks and activities involved in creating, seeking, and
using information, increase the interaction between these activities,
and make them ever more interdependent.

In considering the premise and the promise of a "global information
infrastructure", we must determine what is meant by this phrase.
Already it is used in a variety of contexts, with meanings that include
a set of technologies, a set of principles for an international
computing and communications network, and a loose aggregation of
people, technology, and content.

What Is Infrastructure?

Terms such as "national information infrastructure" and "global
information infrastructure" are being bandied about with minimal
discussion of what is meant by "infrastructure".  Social scientists
and historians are beginning to take a research interest in this
concept, particularly as it relates to organizational communication
and work practices.  Star and Ruhleder (1996, pp. 111-112) describe
infrastructure as follows:

  It is both engine and barrier for change; both customizable
  and rigid; both inside and outside organizational practices.
  It is product and process.  ...  With the rise of decentralized
  technologies used across wide geographical distance, both the
  need for common standards and the need for situated, tailorable
  and flexible technologies grow stronger.

Star and Ruhleder are among the first to describe infrastructure
as a social and technical construct.  Their eight dimensions (ibid.,
p. 113) can be paraphrased as follows: An infrastructure is embedded
in other structures, social arrangements, and technologies.  It is
transparent, in that it invisibly supports tasks.  Its reach or scope
may be spatial or temporal, in that it reaches beyond a single event
or a single site of practice.  Infrastructure is learned as part of
membership of an organization or group.  It is linked with conventions
of practice of day-to-day work.  Infrastructure is the embodiment of
standards, so that other tools and infrastructures can interconnect in
a standardized way.  It builds upon an installed base, inheriting both
strengths and limitations from that base.  And infrastructure becomes
visible upon breakdown, in that we are most aware of it when it fails
to work: when the server is down, the electrical power grid fails, or
the highway bridge collapses.

As a means to explore the technical and public policy implications
of information infrastructure, the Corporation for National Research
Initiatives has sponsored a series of studies that address historical
examples of large-scale infrastructure.  These include studies of
the growth of railroads, telephony and telegraphy, electricity and
light, and banking (Friedlander 1995a,b, 1996a,b).  In each case,
the technologies involved took some time to be adopted, to stabilize,
and to achieve the critical mass necessary to form an infrastructure.
Railroads, telephones, power companies, and banks all provided local
services for years, or even decades, before reaching nationwide
connectivity.  Each developed with some combination of public and
private investment and government regulation.  The means by which
an integrated infrastructure evolved varied, and each involved
experimentation with different forms of technology, regulation, and
social arrangements.

Models of infrastructure for railroads, telephones, energy,
and banking could have taken far different forms than they did.
Indeed, with the possible exception of railroads, each of these
infrastructures is still evolving actively.  Telephony underwent
extensive restructuring in the United States during the 1980s
and the 1990s due to changes in regulatory structure, mergers
and acquisitions, and technological advances.  Similar regulatory
restructuring is now underway in Europe and elsewhere.  Meanwhile,
technology advances and mergers and acquisitions continue apace.  On
the energy front, models for service provision are changing as energy
companies are privatized and global power relationships shift with
variations in supplies and prices of fossil fuels.  On the financial
front, models for banking infrastructure are under scrutiny as markets
for stocks, commodities, currencies, and other financial instruments
are becoming much more tightly coupled.

Each of these infrastructures is deeply embedded in our social fabric,
relies on technical standards, and builds upon an installed base
within the scope of its own and other infrastructures.  A corollary to
the notion that infrastructure becomes visible upon breakdown is that
we rarely are aware of it when it is functioning adequately.  We often
fail to recognize these as essential infrastructures until telephone
service becomes more complex and expensive, energy services change
in cost and character, or the stock market takes a precipitous fall
in value.  And, although Americans make minimal use of railroads,
railroads are an essential form of transportation in much of the
world, where people are very much aware of changes in schedules,
routes, prices, and services.

Star and Ruhleder's (1996) set of eight infrastructure dimensions
highlights the complex interaction of technology, social and work
practices, and standards.  They also emphasize social context
by noting that infrastructure builds upon an installed base.
An information infrastructure is built upon an installed base of
telecommunications lines, electrical power grids, and computing
technology, as well as on available information resources,
organizational arrangements, and people's practices in using all these
aspects.  An installed base establishes a set of capabilities and a
set of constraints that influence future developments.  For example,
mobile telecommunications must interoperate with land-based networks,
and new computers should be able to read files that were created on
the preceding generation of technology.
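
The installed-base constraint can be made concrete with a small sketch.
In the hypothetical Python fragment below, the file layouts are
invented for illustration: a reader accepts records written by the
previous generation of a program as well as the current one, which is
the kind of backward compatibility an installed base demands.

  # Illustrative sketch: honouring an installed base by keeping a new
  # file reader backward compatible with records written by the
  # previous generation of software.  The format details are invented.
  import json

  def read_record(raw):
      """Parse a record, accepting both the old (v1) and new (v2) layouts."""
      data = json.loads(raw)
      version = data.get("version", 1)    # v1 files carried no version field
      if version == 1:
          # Old layout: a single "name" string and nothing else.
          return {"name": data["name"], "tags": []}
      # New layout: structured fields, including tags.
      return {"name": data["name"], "tags": data.get("tags", [])}

  if __name__ == "__main__":
      old_file = '{"name": "annual report"}'
      new_file = '{"version": 2, "name": "annual report", "tags": ["finance"]}'
      print(read_record(old_file))
      print(read_record(new_file))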

The concepts of embeddedness, transparency, and visibility are
especially relevant to a discussion of a global information
infrastructure.  To be effective, a GII must be embedded in the
technical and social infrastructure of the many nations and cultures
it reaches, so much so that the infrastructure is invisible most of
the time.  Whether this degree of embeddedness is possible across
countries and cultures is examined throughout this book.  When
an information infrastructure works well, people depend on it for
critical work, education, and leisure tasks, taking its reliability
for granted.  When it breaks down (for example, when email cannot
be sent or received, when transferred files cannot be read, or when
online information stores cannot be reached), then the information
infrastructure becomes very visible.  People may resort to alternative
means to complete the task, if those means exist; they may create
redundant systems at considerable effort and expense; and they will
trust the infrastructure a bit less each time it breaks down.

Infrastructure as Public Policy

Infrastructures of many kinds are subject to public policy.  For
example, the Clinton administration (1997, 1998) set forth a policy
on "critical infrastructure protection" that is noteworthy for our
concerns.  The white paper on Presidential Decision Directive 63
(Clinton Administration 1998) defines "critical infrastructures"
as "those physical and cyber-based systems essential to the minimum
operations of the economy and government.  They include, but are
not limited to, telecommunications, energy, banking and finance,
transportation, water systems, and emergency services, both
governmental and private".  In the past, these infrastructures were
physically and functionally separate.  However, with advances in
information technology these systems are increasingly linked and
interdependent.  The significance of this interdependence is that
critical systems are ever more vulnerable to "equipment failures,
human error, weather and other natural causes, and physical and cyber
attacks".  PDD 63 has the goal of protecting critical infrastructure
from intentional attack and minimizing service disruptions due to any
other form of failure.

Information technologies link these critical infrastructures, making
them interdependent, and thus all information technologies could
be considered parts of an information infrastructure.  Information
infrastructure usually is more narrowly defined in public policy
documents, however.  Typically the scope includes computing and
communications networks, associated information resources, and perhaps
a set of regulations and policies governing use.

Metaphors for Information Infrastructure

Clever metaphors for information infrastructure have helped to capture
public attention.  The concept of information infrastructure is best
known in common parlance as the "information superhighway" (Gore
1994b), or sometimes as the "I-way" or the "Infobahn".  These metaphors
for information infrastructure emphasize the roads or pipes over
which data flow, whether telecommunications, broadcast, cable, or
other channels.  The highway metaphor captures only a narrow sense
of infrastructure, as it does not encompass information content,
communication processes, or the larger social, political, and economic
context.  The superhighway metaphor is misleading both because it
skews public understanding toward a low-level infrastructure and
because it suggests that the government would pay the direct costs
of the highway's construction.  The Internet was constructed with a
combination of government and private funds.  Current public policy,
especially in the United States, is oriented toward private funding
for further expansion (Branscomb and Kahin 1995; Kahin and Abbate
1995; Kahin and Keller 1995).

Though metaphors such as the information superhighway have been
extremely effective in marshalling support for information
infrastructure development, far more is involved than laying roads
over which information will travel.

National and International Policies

Individual countries began plans for national information
infrastructures in the early 1990s (see, e.g., Information
Infrastructure Program 1992; Karnitas 1996).  In the United States,
there was the National Information Infrastructure Act of 1993.
In Europe, there was the European Union's proposal for a European
Information Infrastructure (Bangemann Report 1994).  The installed
base of technology on which these plans are predicated includes the
Internet, which began in the late 1960s with the ARPANET (National
Research Council 1994; Quarterman 1990), the "intelligent network"
of telecommunications that followed the deregulation of telephony
(Mansell 1993), and related technologies such as cable and satellite
television networks.

In the mid 1990s, national information infrastructure plans began
to converge.  In 1994 the United States proposed formal principles
for a global information infrastructure.  The following principles
were incorporated into the International Telecommunication Union's
"Buenos Aires Declaration on Global Telecommunication Development for
the 21st Century" (1994) and the United States' "Global Information
Infrastructure: Agenda for Cooperation" (Brown et al. 1995):

  o encouraging private sector investment

  o promoting open competition

  o providing open access to the network for all information providers
    and users

  o creating a flexible regulatory environment that can keep pace with
    rapid technological and market changes

  o ensuring universal service.

A few months later, the Group of Seven (seven leading industrialized
nations, known as "G-7") met to discuss these principles and agreed to
collaborate "to realize their common vision of the Global Information
Society" and to work cooperatively to construct a global information
infrastructure (G-7 Ministerial Conference on the Information Society
1995a, pp. 1-2).  These principles emerged from the 1995 G-7 meeting:

  o promoting dynamic competition

  o encouraging private investment

  o defining an adaptable regulatory framework

  o providing open access to networks

  while

  o promoting equality of opportunity to the citizen

  o promoting diversity of content, including cultural and linguistic
    diversity

  o recognizing the necessity of worldwide cooperation with particular
    attention to less developed countries.

The G-7 document also included the following:

  These principles will apply to the Global Information Infrastructure
  by means of:

  o promotion of interconnectivity and interoperability

  o developing global markets for networks, services, and applications

  o ensuring privacy and data security

  o protecting intellectual property rights

  o cooperating in R&D and in the development of new applications

  o monitoring the social and societal implications of the information
  society.

The Buenos Aires and G-7 statements have much in common: they
are concerned with technical capabilities ("interconnectivity",
"interoperability", "open access"), promises of rights to provide
network services ("open competition", "dynamic competition"),
guarantees of network services ("universal service", "equality of
opportunity"), a means of funding network development ("encouraging
private investment"), and a means of regulating various aspects of
its development and use ("flexible regulatory environment", "adaptable
regulatory framework").  However, they vary on their treatment of
content: the G-7 principles promote diversity of content and offer
some general protections ("privacy", "data security", "intellectual
property"), while the telecommunications principles do not
mention content, addressing only the development and regulation of
communication channels.

Implementing Global Policy

Statements by the G-7 and other multinational bodies such as the
United Nations promote policy agendas of the countries involved, but
they lack the force of law and they provide little if any funding
for implementation.  Some of the language offers more platitudes than
policy, such as the claim in the European Information Infrastructure
plan that, "as a strategic creation for the whole Union", it will lead
to "a more caring European society with a significantly higher quality
of life" (Bangemann Report 1994).

The G-7 policy statements that frame a global information
infrastructure have raised considerable concern about human
rights and social protections from adverse consequences of its use.
Though the G-7 principles include a general statement about privacy
and comment on the need to monitor the social implications of the
information society, they do not ensure legal protection of rights
such as privacy, free expression, and access to information.  Despite
requests by human rights groups, the G-7 principles omit references to
assurances in the United Nations Declaration of Human Rights that were
approved in 1948 (see United Nations 1998).  Particularly relevant are
articles 12 and 19:

  Article 12: No one shall be subjected to arbitrary interference with
  his privacy, family, home or correspondence, nor to attacks upon his
  honor and reputation.  Everyone has the right to the protection of
  the law against such interference or attacks.

  Article 19: Everyone has the right to freedom of opinion and
  expression; this right includes freedom to hold opinions without
  interference and to seek, receive and impart information and ideas
  through any media and regardless of frontiers.

These principles are receiving renewed attention upon the fiftieth
anniversary of their adoption (United Nations 1998).  Computer
networks offer unanticipated capabilities for free speech and access
to information.  Because transactions and interactions are easily
trackable, computer networks also can create unanticipated intrusions
into privacy (Kang 1998).  Many privacy advocates promote an
alternative design model, known as "privacy-enhancing technologies"
(Burkert 1997), in which individuals can acquire access to most
information services without revealing their identity if they so
choose.  Privacy, freedom of speech, and freedom of access to
information are tenets of democracy (Dervin 1994; Lievrouw 1994a,b).
People cannot speak freely or seek information freely if their
movements are being tracked and if they cannot protect and control
data about themselves (Agre and Rotenberg 1997; Diffie and Landau
1998; Information Freedom and Censorship 1988, 1991).
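
As a rough illustration of the privacy-enhancing design model mentioned
above, the sketch below (in Python; the names are hypothetical and the
sketch is not drawn from Burkert 1997 or from any particular system)
shows a service that issues opaque, randomly generated tokens rather
than recording who its users are, so that requests can be authorized
without revealing identity.

  # Minimal sketch of a pseudonymous access model; all names are
  # hypothetical and no real privacy-enhancing product is implied.
  import secrets

  class PseudonymousGateway:
      def __init__(self):
          self._tokens = set()      # opaque tokens; no identities are stored

      def issue_token(self):
          """Hand out a random, unlinkable token to a new user."""
          token = secrets.token_hex(16)
          self._tokens.add(token)
          return token

      def serve(self, token, resource):
          """Serve a resource to any holder of a valid token."""
          if token in self._tokens:
              return "contents of " + resource
          raise PermissionError("unknown token")

  gateway = PseudonymousGateway()
  ticket = gateway.issue_token()    # the user never supplies a name
  print(gateway.serve(ticket, "/public/report"))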

These are contentious issues in the United States.  For example, the
federal policy on critical infrastructure protection, discussed above,
has been challenged on the ground that it could erode civil liberties
(Electronic Privacy Information Center 1998).
Public policy on social aspects of information infrastructure is
subject to the laws, the norms, and the practices of individual
countries and jurisdictions, despite the global reach of computer
networks.  When local activities took place only locally, variances
in policy and regulation were less apparent and jurisdiction was
rarely an issue.  Now that individual communications and information
resources flow quickly and in vast quantities across borders,
variances in policy and regulation can be highly visible and
jurisdiction can be highly contentious.  Privacy rights and
regulations have become an international battlefield where many of
these issues are being played out.

The European Union Data Directive, which took effect in late 1998,
highlights fundamental differences in policy approaches to privacy
protection.  The United States long has taken a "sector approach",
with specific laws governing credit reports, library borrowing
records, videotape rentals, federal government databases, etc.  In the
new arena of computer networks, US policy has favored self-regulation
by the private sector over government-imposed regulation.  In
contrast, European countries have favored generalized policies governing
the control of personal data, assigning stronger rights to individuals
to control information about themselves than to organizations that
collect and manage personal data.  The EU Data Directive consolidates
the policies of individual countries and regulates privacy protections
throughout the European Union.  In view of the extensive commerce
between the United States and the European Union and the volumes
of data about personnel, customers, clients, and suppliers that are
subject to regulation, the policies of these jurisdictions often are
in conflict.

For overviews of the rapidly evolving landscape of electronic privacy,
see Agre and Rotenberg 1997, Diffie and Landau 1998, Kang 1998,
Rotenberg 1998, and Schneier and Banisar 1997.  Updates, including
pointers to government documents and other primary sources, can be
found at http://www.privacy.org and at http://www.epic.org.

Information Infrastructure as a Technical Framework

"Information infrastructure" can refer to a technical framework rather
than to a public policy.  As defined by the (US) National Research
Council (1994, p. 22), an information infrastructure is "a framework
in which communications networks support higher-level services for
human communication and access to information.  Such an infrastructure
has an architectural aspect - a structure and design - that is manifested
in standard interfaces and in standard objects (voice, video, files,
email, and so on) transmitted over the interfaces". 

One of the key components in defining an information infrastructure
as a technical framework is for it to have an open architecture that
will enable all parties to interconnect electronically and to exchange
data.  The "Open Data Network" concept (National Research Council
1994) follows both from the Internet (a successful open architecture
for computing) and from established telecommunications policy
principles (Mansell 1993; National Research Council 1994).  Under
the G-7 principles, closed networks can interconnect with the open
network; closed service networks such as cable television are allowed
under other communications regulations as well.  As we move toward
ubiquitous computing, a wider array of devices must interconnect,
making open systems and interoperability all the more important.

The emerging global network that interconnects a wide variety of
computing devices located around the world offers great utility
for communication between individuals and organizations, whether
for education, work, leisure, or commerce.  The technical framework
for such an information infrastructure is now expected to support
a range of tasks and activities far wider than that for which it
was originally designed, however.  The original ARPANET and the early
generations of the Internet were constructed by and for the research,
development, and education communities (Quarterman 1990).  Benign uses
by a collegial community were presumed when its technical architecture
was designed (Oppliger 1997).

Substantial enhancements are being made to the technical architecture
of the Internet to support a vastly larger volume and variety
of users, capabilities, and services than was anticipated in the
original design.  Two new network services illustrate the scope
of the improvements that are under way (Lawton 1998; Lynch 1998).
One is "quality of service": the ability to reserve a set amount of
bandwidth, at a predetermined level of quality, in advance.  Rather
than the current model, which is largely "first come, first served"
for bandwidth usage, mostly at flat pricing, the new model supports
differential pricing for differential services.  Many organizations
are willing to pay a premium to guarantee adequate bandwidth at a
specified time (for a teleconference or a distance-education course,
for example).  Conversely, many individuals are willing to tolerate
delays in email delivery or Web access in return for lower costs.
In view of the complexity of Internet architecture and the number
of political and service-provider boundaries crossed by an individual
transmission, guaranteeing quality of service will not be a simple
accomplishment.  Though quality of service is considered an essential
capability of an information infrastructure, precise assessments
of what can be guaranteed and how it can be measured have yet to be
established (Lynch 1998).
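
To make the pricing contrast concrete, here is a minimal sketch (in
Python; the rates, the per-gigabyte charging model, and the names are
invented for illustration and are not taken from any provider or from
Lynch 1998) of differential pricing for reserved versus best-effort
service.

  # Minimal sketch of differential pricing; the rates are invented and
  # do not reflect any actual service.
  FLAT_RATE_PER_GB = 0.10      # best-effort, "first come, first served"
  PREMIUM_RATE_PER_GB = 0.25   # bandwidth reserved in advance at a set quality

  def charge(gigabytes, reserved=False):
      """Return the cost of a transfer, with or without a reservation."""
      rate = PREMIUM_RATE_PER_GB if reserved else FLAT_RATE_PER_GB
      return gigabytes * rate

  # A distance-education course pays a premium for guaranteed bandwidth,
  # while routine email accepts best-effort delivery at the flat rate.
  print(charge(5, reserved=True))   # 1.25
  print(charge(5))                  # 0.5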

Multicasting is another long-awaited service improvement for the
technical framework of a global information infrastructure.  At
present, most communications are point-to-point ("unicasting"): copies
of a message are sent individually to each intended recipient.  The
alternative is broadcasting, in which one message is sent to all users
of the network, whether they want it or not.  An intermediate model is
"multicasting": one message is sent to a selected group of recipients,
reducing the amount of bandwidth required.  Technically, under
multicasting, the originating server sends one message to each network
router on which intended recipients are located, and each router
re-sends the message to its local subscribers (Lawton 1998).  As with
quality of
service, the number of providers involved makes multicasting a complex
process, but one that is necessary for efficient use of bandwidth
on a global information infrastructure (Lynch 1998).  A variety of
economic and technical models for network service provision are under
consideration for the next generation of network architecture (Shapiro
and Varian 1998).
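
A small sketch may help to show why multicasting conserves bandwidth.
The Python fragment below counts the copies the originating server
must send under unicasting and under multicasting; the topology and
names are hypothetical.

  # Minimal sketch: copies sent by the originating server under
  # unicasting versus multicasting, for an invented topology.
  recipients_by_router = {
      "router-A": ["alice", "amir", "ana"],
      "router-B": ["bo", "bina"],
      "router-C": ["carlos"],
  }

  # Unicasting: one copy per recipient leaves the originating server.
  unicast_copies = sum(len(users) for users in recipients_by_router.values())

  # Multicasting: one copy per router that has intended recipients;
  # each router then re-sends the message to its local subscribers.
  multicast_copies = len(recipients_by_router)

  print(unicast_copies, multicast_copies)   # 6 copies versus 3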

The Internet is already a "network of networks".  A global information
infrastructure will be even more so.  Though we speak metaphorically
of a single open network, in actuality the Internet links many layers
of networks within organizations, within local geographic areas,
within countries, and within larger geographical regions.  These go
by various names, including intranets, extranets, local-area networks
(LANs), metropolitan-area networks (MANs), and even tiny-area networks
(TANs).  Suffice it to say that the information infrastructure
topography is becoming increasingly complex, linking together internal
organizational networks, closed networks such as cable TV, and the
international Internet.

The boundaries of individual networks can be controlled to varying
degrees.  A common technique is to protect organizational or even
national networks with "firewalls" that limit the abilities of
authorized users to exit and of outsiders to enter.  Some internal
resources can be publicly accessible while others are restricted
to internal use, for example.  Similarly, firewalls and filtering
techniques can be used to limit external sites that can be reached.
Parents can limit their children's ability to connect to sites known
to contain pornography or other undesirable material.  The definition
of "undesirable" varies by context.  Companies can limit access to
known sites containing games.  Countries can limit access to sites
known to provide undesirable political views.  China, for example,
currently attempts to control access to sites outside the country
through a single gateway, so that specific sites deemed objectionable
can be blocked.  Chinese Internet users are required to register
with the police to gain access to the network (Tan, Mueller, and
Foster 1997).  A key phrase here is "known sites".  As the Internet
proliferates, new sites appear daily, and sites change names,
location, and content frequently.  Reliable filtering software that
can distinguish between acceptable and unacceptable materials is not
yet feasible, and may never be.
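
The limitation of filtering by "known sites" can be seen in even the
simplest blocklist.  The sketch below (Python; the hostnames are
invented) permits anything that is not already on the list, so a site
that is new or has changed its name passes through unchecked.

  # Minimal sketch of blocklist filtering; the hostnames are invented.
  BLOCKED_SITES = {"games.example.com", "banned-politics.example.org"}

  def allowed(hostname):
      """Permit any site that is not on the blocklist of known sites."""
      return hostname not in BLOCKED_SITES

  print(allowed("games.example.com"))      # False: a known, blocked site
  print(allowed("new-games.example.net"))  # True: unknown, so it slips through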

For most businesses and governments, security and risk management are
far greater concerns than is pornography.  After connectivity, the
most important enabling technology for electronic commerce is security
(Dam and Lin 1996; Geer 1998; Oppliger 1997).  One model being studied
and implemented is "trust management", in which mechanisms such as
cryptography are employed to verify the identities of all parties
involved in electronic transactions.  Such transactions include
buying and selling goods or services, transferring secure data
(such as financial transactions between banks and stock markets),
and proprietary communications within organizations or between
organizations and their clients, customers, and suppliers.  Both
retail transactions between individuals and companies and wholesale
transactions between companies can be accommodated.  An alternative
model is "risk management", which focuses on the likelihood of losses
and the size of potential losses from electronic commerce.  Rather
than assume that trust can be guaranteed in all transactions, parties
try to determine the degree of risk exposure and to insure against it.
Cryptography is essential to both models as a means of assuring the
authenticity of transactions to the extent possible.  The frontiers
of electronic commerce are being tested in the financial markets
today.  In view of the size and volume of transactions among banks,
stock markets, investors, and other parties, many technical and policy
aspects of information infrastructure are likely to be tested first in
this arena (Geer 1998).
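
The two stances can be contrasted in a highly simplified sketch.  In
the Python fragment below, a shared-key message authentication code
stands in for the stronger cryptographic identity checks used in real
trust-management systems, and an expected-loss calculation stands in
for risk management; the key, the figures, and the function names are
all hypothetical.

  # Minimal sketch contrasting trust management and risk management;
  # the shared key and the loss figures are invented for illustration.
  import hmac
  import hashlib

  SHARED_KEY = b"hypothetical shared secret"

  def sign(message):
      return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

  def trusted(message, tag):
      """Trust management: accept only transactions that verify."""
      return hmac.compare_digest(sign(message), tag)

  def expected_loss(probability, size):
      """Risk management: estimate exposure, then insure against it."""
      return probability * size

  order = b"transfer 100 units to account 42"
  print(trusted(order, sign(order)))    # True for an authentic order
  print(expected_loss(0.001, 250000))   # 250.0 units of expected loss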

Information Infrastructure as Technology, People, and Content

Among the broadest conceptualizations of an information infrastructure
is that presented in National Information Infrastructure: Agenda
for Action 1993, where an NII is defined as encompassing a nation's
networks, computers, software, information resources, developers, and
producers.  This definition comes closer to capturing the larger sense
of infrastructure as a complex set of interactions between people
and technology than do most other public policy statements, technical
definitions, or metaphors.

The above definition is compelling, if vague, because it recognizes
that we are creating something new, something that is more than the
sum of its parts.  The information infrastructure is not a substitute
for telephone, broadcast, or cable networks, for computer systems,
for libraries, archives, or museums, for schools and universities,
for banks, or for governments.  Rather, it is a new entity that
incorporates and supplements all these technologies and institutions
but is not likely to replace any of them.  However, a GII is likely
to change each of these institutions, and how people use them, in
profound ways.

The term "global information infrastructure" is used in this broad
sense throughout the present book.  A GII consists of a technical
framework of computing and communications technologies, information
content, services, and people, all of which interact in complex and
often unpredictable ways.  No single entity owns, manages, or controls
the technical framework of a GII, although many governments, vast
numbers of public and private organizations, and millions of people
contribute to it and use it.  The GII is perhaps best understood
by the metaphor of the elephant being examined by a group of blind
people: each one touches a different part of the beast, and thus senses
a different entity.  From this perspective, a global information
infrastructure is a means for access to information.  However, it can
be viewed from many complementary perspectives that also are valid.

Summary

These are exciting times.  Information technologies are increasing in
speed, power, and sophistication, and they now can link together a
vast array of devices into a network that spans the globe.  They offer
new ways of learning, working, and playing, as well as conducting
global commerce.  Some contend that these changes are revolutionary
and will change the world; others argue that the changes are
evolutionary, and that individuals and organizations will incorporate
networked information technologies into their practices just as they
incorporated many earlier media and technologies.  In this book I take
the view that these changes are neither revolutionary nor evolutionary
but somewhere between: that they are co-evolutionary.  New
technologies are based on perceived needs and available capabilities.
People adopt these new technologies if and when they deem the
technologies useful and when they deem the effort and the costs
appropriate.  Sometimes individuals make these decisions; sometimes
organizations make them.  The result is that some technologies
are adopted by some of the people some of the time.  No matter
how voluntary or involuntary the adoption process, individuals and
organizations adapt technologies to their interests and practices,
often in ways not anticipated by the designers of those technologies.
Information technologies are more flexible and malleable to individual
practices than are most other innovations, and this makes them
especially adaptable.  They also evolve more quickly than most other
innovations, with new and improved versions appearing at a dizzying
rate.

Adoption and adaptation of technology are difficult to predict, owing
to the complex interactions between characteristics of information
technologies, practices of individuals and organizations, economics,
public policy, local cultures, and a host of other factors.
Organizations acquiring new technologies find that estimates of
first-level effects, such as those on productivity and profits,
are unreliable.  Reliable predictions of longer-term, second-level
effects, such as those on organizational communication and structure,
are nearly impossible.  One reason is that external factors, such as
changes in the legal status of electronic communications, can have
profound effects on how individuals and organizations use information
technologies.

We are in the process of creating a global information infrastructure
that will interconnect computer networks and various forms of
information technologies around the world.  After reviewing some of
the many meanings of "information infrastructure", I have settled on
a concept that encompasses people, technology, and content, and the
interactions among them.  This broad definition incorporates
definitions of information infrastructure as a set of public policies
and as a technical framework.  The broader definition is best suited
to studying the co-evolution of technology and behavior as related
to access to information, which is the primary concern of this book.
An information infrastructure is only one of several infrastructures
that are essential to a well-functioning society.  Others include
energy, transportation, telecommunications, banking and finance,
water systems, and emergency services.  Because
each of these infrastructures is increasingly reliant on information
technologies, they are more interconnected and interdependent.  Their
interdependence means that more and more aspects of daily life depend
on the emerging global information infrastructure.

end

