Hi there!
Jon, Richard, thanks a lot for contributing to the conversation. I had the chance to meet Richard at the last Re:Trace Media Art Histories conference in Vienna -thanks a lot for your presentation!
Regarding Jon's comments, I wanted to use the Rosetta Stone as an example of how little sense it makes to store things without the means to understand what is written/stored. As Jon points out, it was an exception. To avoid losing the content, preservation should provide a continuous “translation”. It sounds like bringing the “amanuensis” back from the Middle Ages -or even ancient Egypt- to the 21st century, but to me it doesn’t sound “that” crazy -well, maybe it is crazy considering the resources it may take. As Jon suggests, we should "ensure that a dedicated community has access to and permission to re-create their cultural heritage, so that each generation acts like a human Rosetta Stone”.
I would like to add that we would need to “rate" that translation/migration/re-creation/re-interpretation, in order to keep the “meaning” of the original -or as much of it as possible. When it comes to images or moving images, the problem is technically very much solved -for example with FFmpeg- and new encodings can be compared with the original in order to rate the translation.
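As a minimal sketch of what such a rating could look like -the metric (PSNR) and the toy data here are illustrative assumptions of mine, not any specific tool's method- one could compute a pixel-level "distance" between an original frame and its re-encoded version:

```python
import numpy as np

def psnr(original, recoded, max_val=255.0):
    """Peak signal-to-noise ratio: a pixel-level 'distance' between an
    original frame and its re-encoded version (higher = closer)."""
    mse = np.mean((original.astype(np.float64) - recoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * np.log10(max_val ** 2 / mse)

# Toy example: a random frame and a slightly perturbed "re-encoding"
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-2, 3, size=frame.shape)
recoded = np.clip(frame.astype(int) + noise, 0, 255).astype(np.uint8)

score = psnr(frame, recoded)
print(round(score, 1))
```

FFmpeg itself ships a `psnr` filter that performs a comparable frame-by-frame measurement over whole videos (`ffmpeg -i recoded.mp4 -i original.mp4 -lavfi psnr -f null -`), though as noted below, pixel-level metrics only partially capture perceived quality.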
When it comes to installations, custom hardware or computer-based works, it would be more interesting to describe the work abstractly, in a systemic way*, which would give the essential information for re-interpretation or re-creation of the work. Here, engineers and people with technical backgrounds should be taken into account and even take part in conservation teams in institutions. How to rate those re-creations would be more complicated, but I can think of ways of doing so.
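To make the idea concrete -this is a hypothetical sketch of my own, not an existing standard; all names and fields are invented- a systemic description could record each component by the role it plays in the work and how components connect, so that a future team can re-create the system with different hardware:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One element of the installation, described by its role
    rather than by a specific make or model."""
    name: str
    role: str                 # the function it fulfils in the work
    replaceable: bool = True  # can it be substituted without loss of meaning?
    notes: str = ""

@dataclass
class SystemicDescription:
    """An abstract, system-level record of a media artwork:
    what each part does and how the parts connect."""
    title: str
    components: list = field(default_factory=list)
    connections: list = field(default_factory=list)  # (source, target, signal)

# Hypothetical example entry
work = SystemicDescription(
    title="Untitled (sensor-driven projection)",
    components=[
        Component("camera", role="captures visitor position"),
        Component("computer", role="maps position to generated imagery"),
        Component("projector", role="displays imagery on wall"),
        Component("custom PCB", role="triggers sound on proximity",
                  replaceable=False, notes="artist-built; document schematics"),
    ],
    connections=[
        ("camera", "computer", "video feed"),
        ("computer", "projector", "video signal"),
        ("custom PCB", "computer", "serial trigger"),
    ],
)

# A re-creation team needs the roles and connections,
# not the exact hardware models of the original exhibition
irreplaceable = [c.name for c in work.components if not c.replaceable]
print(irreplaceable)
```

The point of the schema is the `replaceable` distinction: most parts are described functionally and can be swapped, while the few that carry the work's identity are flagged for deeper documentation.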
Regarding context, of course it is essential. But -again, just my opinion- we cannot try to keep all the context. A lot of the context of past artworks has been lost, and some of those works will probably never be fully understood because of that loss -and I don’t think we need to go 5000 years back. But that doesn’t mean we now have to achieve a full conservation of the context. Sometimes I feel that, because we have technically achieved a very high degree of conservation, we feel we need to conserve absolutely everything about an artwork. That would be the optimal case. But sometimes the optimal works against the good. Sorry if my ignorance makes me too bold -here I am being too much of an engineer and too little of a conservator, and I can only acknowledge my lack of background in that field- but maybe we are losing the chance to preserve our heritage by trying to save too much.
During the Re:Trace conference I mentioned before, where Richard saw Patricia Falcao's slide on the Tate Collection, there was a panel on Media Art Conservation where most of the speakers agreed that a “standard” was needed. I asked what that standard would look like, and Patricia gave a great answer -again, my opinion- saying it should be built on open source solutions. Considering the huge amount of information and effort needed, I think that is reasonable. But it means making contents available to people, which may not be easy to swallow for some players in the art world. Sometimes galleries don’t let anybody make even a migration, or just a copy of a software piece to a working computer, claiming it will no longer be the “original”. In these cases it usually has more to do with business -who can charge, and how much, for a work- than with conservation itself. I think we should be more open about who can perform those tasks and how. I am not talking about an open bar, but that “dedicated community” Jon mentioned should be open to more, different profiles. Sometimes, half joking, half seriously, I think of IT and Building Maintenance departments -and their budgets- being included in the conservation of these artworks. Connecting with Richard’s post, this would help not only to increase the funding but also to have the right knowledge when it comes to certain materials!
————————————
* Sorry to be self-referential. Regarding those two topics, I have been neither the first nor the only one, but for example I have programmed applications that compare the difference between encodings at pixel level as a means to compute a “distance” between them, as a reference for how close the re-encoding is to the original. This is not super useful on its own, since perceived quality has more to do with perception than with raw pixel values. Regarding the systemic approach to new media artworks, I presented a poster on that topic at Re:Trace, and a couple of papers can be found at https://independent.academia.edu/DiegoMellado
> On 2 Jan 2018, at 1:42, Jon Ippolito <[log in to unmask]> wrote:
>
> Hi everyone,
>
> Despite Sara Diamond's request, I don't have the willpower to steer this discussion completely away from the siren of media preservation. I can, however, try to relate preservation to the issue of what it means to make art in public in the 21st century.
>
> Although I find Johannes Goebel's distinction between documents and performances interesting, I have little confidence that storage innovations like M-disks will be of much value in storing either. Let's forget for the moment that M-disks are a proprietary system untested by outside experts and focus instead on the bits themselves. Diego Mellado points in the right direction when he hopes for a digital version of the Rosetta Stone, which of course is a translation matrix among Greek, Demotic, and Egyptian hieroglyphics. It's the poster child for perseverance through durability.
>
> Unfortunately, the Rosetta Stone is the exception, not the rule. Most ancient artifacts contain only one language, and without context or historical continuity, there's little basis to figure out what those languages mean. Take Linear A, the script used for 1000 years on ancient Minoan tablets. The characters of Linear A are all clearly legible; they've even been added to Unicode for chrissake. But archeologists have only wild guesses as to what those beautifully preserved documents could mean.
>
> To be sure, pulling an M-disk out of storage after 50 years is certainly different from pulling a Phaistos disk out of an archeological dig after 5000 years. So, assuming the next 50 years doesn't involve us dying in a climate catastrophe or nuclear hellfire, it's tempting to assume cultural continuity will allow us to trace back the roots of whatever language was stored on the M-disk. Tempting, until you realize that there is almost no cultural continuity between the digital languages on the documents stored in your attic. When WordStar gave way to WordPerfect in the 1990s, and then WordPerfect gave way to Microsoft Word in the 2000s, it's not like each company demurely gave their source code to the next in the name of cultural continuity; most take their proprietary source to the grave.
>
> The open-source transcoder FFmpeg is probably as close as we have to a digital Rosetta Stone, and it has to translate between a hundred different languages, all of which cropped up in only the past couple decades. And that's just for linear, audiovisual recordings--there is nothing comparable for interactive artifacts. It's hard enough to translate among the dozen metadata standards used by archives and museums--and those standards were introduced explicitly for access and preservation!
>
> Perhaps more relevant than decipherability for this month's topic is the loss of context that accompanies archiving a recording on a disk. The biggest innovation in the moving image in the last decade has not been in formats, but in networks. Hollywood films are screened less in individual theaters and binge-watched more in Netflix and Hulu streams. Home movies have shrunk to six-second bursts, and their value is no longer self-contained but lies in their reverberation across Instagram, Twitter, and Facebook, where they gather references, remixes, and sometimes political clout.
>
> The information in a network, unlike a hierarchy, grows exponentially as the network scales up--and today's networks are planet-sized. That's one reason the Library of Congress decided this month to archive tweets selectively instead of exhaustively. But the information is still out there, on the cloud servers of the providers themselves, and digital sociologists with access to those data troves could reconstruct chains of creative reuse that would require years of painstaking research by art historians to do manually. As recent US court cases attest, the Internet is increasingly where public events take place, and it may serve both as the stage for performing them and the matrix for preserving them.
>
> Sadly, I know very few preservation initiatives designed to capture network context. Some ventures, such as the Preserving Virtual Worlds consortium, have settled for documenting MMORPGs rather than re-creating them as a multi-user experience. As is often the case, however, new media artists have leapt forward where archivists fear to tread. For his "collider" website Jackpot [1], Maciej Wisniewski appeared to search the Web randomly but in fact drew on a predefined pool of websites, a strategy that with some modification could allow the work to be re-animated today. Olia Lialina's Last Real Net Art Museum [2], meanwhile, envisions a preservation model closer to performance, in which a key work of net art, My Boyfriend Came Home from the War, perseveres by being reinterpreted by subsequent "performers" in new media with new inflections.
>
> Re-collection, my book with Richard Rinehart [3], is full of examples of cultures that have survived by networks of re-interpreters. I just learned another from Prumsodun Ok's TED talk about Khmer classical dance [4]. Like most dance traditions, Cambodia's native choreography is passed person to person, by apprenticeship and reinterpretation. In the 1970s the Khmer Rouge systematically murdered every Cambodian who could speak more than one language or wore glasses because they represented the pre-communist cultural order. According to Ok, Khmer classical dancers were among those targeted, and 90 percent of its practitioners were exterminated inside of a few years. Yet the resilience of proliferative preservation is strong enough that those who did survive succeeded in training a new generation of performers to keep Khmer dance alive.
>
> It's not the first time indigenous art has survived the onslaught of a powerful force bent on cultural genocide, and it probably won't be the last. The secret to these success stories of preserving complex cultural artifacts isn't M-discs, MPEGs, or Mini-DVs--and as much as I long for an officially funded infrastructure, it's not that either. The secret is protocols that ensure that a dedicated community has access to and permission to re-create their cultural heritage, so that each generation acts like a human Rosetta Stone.
>
> I'm curious if any of the younger folks on this list (Diego? Anne-Sarah?) have any thoughts on this. In the meantime, cheers from the arctic blast, where it's cold even for Maine.
>
> jon
> ________________
> Study Digital Curation online
> http://DigitalCuration.UMaine.edu
>
> [1] http://www.adaweb.com/context/jackpot
> [2] http://myboyfriendcamebackfromth.ewar.ru
> [3] http://re-collection.net
> [4] https://www.ted.com/talks/prumsodun_ok_the_magic_of_khmer_classical_dance
>