MUSEUM-L Archives

Museum discussion list

MUSEUM-L@HOME.EASE.LSOFT.COM


Subject:
From: Eric Siegel <[log in to unmask]>
Reply To: Museum discussion list <[log in to unmask]>
Date: Wed, 28 Sep 1994 16:17:26 EST
Content-Type: text/plain
          Mark, thank you for your extremely lucid and well-informed
          presentation of the pros and cons of current implementations
          of Web servers and clients.
 
          I think the issue of "where does the processing occur," over
          the network or locally, and whether you have to pump all
          these bits through this narrow 1200-baud passageway, has *a
          lot* of relevance for museums.
 
          Let's imagine two models of providing images and data from
          the XYZ Garden's 8-million-specimen herbarium collection
          (and I *must* emphasize that this is all personal
          speculation; though I am on the NY Botanical Garden staff, I
          have no authority or responsibility for computerization).
          The first is to sample and store all the visual data on a
          server, with the associated text data and all the
          associated search functions, pointers, etc. This server
          sits at the Garden, on the Web, accessible to all and
          sundry with Mosaic or other Web client tools.
 
          The second model is that low-density images, or just text
          descriptions of specimens, are distributed on CD-ROMs by
          subscription to libraries and universities (I know... gasp,
          not universal access, but is that so important for the
          *many* specialized collections that are out there?). Search
          tools are either standardized or provided on the CD-ROM
          itself.
 
          When the search is narrowed to the point where additional
          information is required, or more detailed images are needed,
          the user may access the server through the NET or whatever.
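          The two-tier lookup described above can be sketched in a few
          lines. This is purely illustrative; the index contents,
          specimen IDs, and the `fetch_full_record` function are all
          hypothetical stand-ins, not anything the Garden actually
          runs.

```python
# Hybrid model sketch: consult the local CD-ROM index first, and go
# out to the museum's server only when more detail is requested or the
# record is missing locally. All names and data here are invented.

LOCAL_INDEX = {
    "NYBG-000123": {"name": "Quercus alba", "image": "low-res thumbnail"},
    "NYBG-000124": {"name": "Acer rubrum", "image": "low-res thumbnail"},
}

def fetch_full_record(specimen_id):
    """Placeholder for a network request to the museum's server."""
    return {"id": specimen_id, "image": "high-res scan", "source": "server"}

def lookup(specimen_id, want_detail=False):
    record = LOCAL_INDEX.get(specimen_id)
    if record is None or want_detail:
        # Only now does the client touch the network.
        return fetch_full_record(specimen_id)
    return record

print(lookup("NYBG-000123")["image"])
print(lookup("NYBG-000123", want_detail=True)["image"])
```

          The point of the design is that the common case (browsing
          low-density records) never crosses the wire at all.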
 
          Dare I suggest that this is a revenue stream for the museum,
          analogous to a publishing program? I mean, it's *expensive*
          to database this stuff.
 
 
          In another example, imagine that there is a standard engine
          for interpolating video images (actually, I assume there is,
          but I'm not technically up to date on it). This engine is
          stored locally, as software or firmware, on the client.
          Then all that would be required is the transmission of
          "clues" from the museum to provide a full-motion video tour
          of the museum, or whatever interactive content the museum
          wanted to offer.
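          A toy version of "clues plus local interpolation": the
          server transmits only sparse keyframes, and the client
          reconstructs the frames in between on its own hardware.
          Real video engines are far more sophisticated; this only
          shows the principle, with made-up pixel values.

```python
# The museum sends two keyframes; the client linearly interpolates the
# frames between them, so only the "clues" cross the narrow pipe.

def interpolate(frame_a, frame_b, t):
    """Blend two frames (flat lists of pixel values) at fraction t in [0, 1]."""
    return [a + (b - a) * t for a, b in zip(frame_a, frame_b)]

key0 = [0, 0, 0, 0]       # keyframe transmitted by the museum
key1 = [100, 80, 60, 40]  # next transmitted keyframe

# The client fills in the intermediate frames locally.
tour = [interpolate(key0, key1, t / 4) for t in range(5)]
print(tour[2])
```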
 
          All I am saying, as Negroponte suggested, is that taking
          advantage of computing power on the client side could reduce
          the clogging in the pipeline. I mean, we all have pretty
          zippy machines as clients nowadays, and I am pretty sure
          that the bottlenecks don't occur in our processors, hard
          drives, or video subsystems.
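          Some back-of-envelope arithmetic makes the pipeline point
          concrete. At 1200 baud, with the usual 10 bits per
          transmitted character, the line moves roughly 120 bytes per
          second; the image and "clue" sizes below are illustrative
          assumptions, not measurements.

```python
# How long does the 1200-baud pipe take to move a full image versus a
# compact description the client expands locally? Sizes are made up.

BYTES_PER_SEC = 120          # ~1200 baud at 10 bits per character

def transfer_minutes(size_bytes):
    return size_bytes / BYTES_PER_SEC / 60

image = 300 * 1024           # one modest digitized specimen image
clues = 4 * 1024             # compact "clues" the client expands

print(f"full image: {transfer_minutes(image):.1f} min")
print(f"clues only: {transfer_minutes(clues):.1f} min")
```

          Roughly three quarters of an hour versus well under a
          minute, which is the whole argument for pushing work to the
          client.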
 
 
 
          Eric Siegel
          [log in to unmask]
