Kim H. Veltman
“Challenges to Digital Culture”, Managing Heritage Collections II: Continuity and Change, An International Conference Organised by the Estonian Ministry of Culture,
Tallinn, May 2005, Tallinn: Ministry of Culture
Culture is about how individuals, groups, peoples and nations 1) express themselves, their history and their relation to universal themes in particular, unique ways; 2) how they conserve these expressions; and 3) how they communicate these expressions. The medium chosen affects all three aspects of culture. Most discussions of the shift from analogue to digital communication focus solely on the third aspect. We shall outline the first two aspects and then turn to this third aspect of new access: exploring some of the changes that are occurring, assessing the present state and pointing to a number of possibilities, challenges and dangers. The lecture accompanying this paper will focus on examples of technological developments to illustrate these possibilities.
The shift from analogue to digital media is something much more profound than just another technical advance. Whereas earlier advances set out to replace their predecessors, digital media introduce, for the first time in history, a potential to move between and among media and senses. Hence, something that is handwritten as input form can be output as printed text or even, potentially, as oral speech. Something that is spoken as input form can be output orally, in print, as the equivalent of handwriting or, using stereo-lithography, even in cuneiform.
The time-frame of these fundamental shifts is long. Moveable type for printing was introduced in China around 1040 AD; metal moveable type was developed in Korea by the thirteenth century. Gutenberg in the 1450s did not invent a new technology. His contribution was to use the technology for sharing knowledge. He went bankrupt. It took another 150 years before the “real” impact of printing became fully clear. This time-frame is instructive when we consider the computer revolution. Early mechanical computers were developed in the 1880s. The first programmable computer was officially developed in Manchester in June 1948. The Internet began in Britain in 1968. During the first 32 years of its existence the Internet grew to 100,000 hosts. In the 1990s with the advent of new markup languages (HTML), protocols (http) and browsers (Mosaic, Netscape, Internet Explorer) the Internet grew to 200 million users. In the past four years there have been 600 million new users.
The past 35 years have seen a transformation in the scope of networked computing from a narrow Internet for specialized communities in high energy physics and astronomy to the vision of a World Wide Web which is accessible and usable by all citizens. Hand in hand with this enormous growth in the use of networks has been an equal growth in new technological solutions. Processing speed, disk capacity and transmission speed, which once posed seemingly insurmountable hurdles, are no longer primary stumbling blocks. An enormous range of possibilities exists. Technology continues to evolve. Increasingly the real challenges are organizational, psychological and political. Careful and sustained efforts are needed to meet them. One useful attempt to outline some of these challenges is a Digicult Report on The Future Digital Heritage Space (December 2004). In our view, in the cultural field, these challenges lie in three main areas: expression (especially new creativity), conservation and communication.
Media fundamentally affect what can be expressed. In pre-literate times, in terms of expression, the choice to use petroglyphs greatly limited the range and subtlety of images that could be created. In terms of conservation, the method was very efficient although, as the experience of Lascaux has shown, this conservation is threatened severely once there are numerous visitors. In terms of communication, petroglyphs can be seen only by those who are on location.
Subsequent innovations to create expressions such as totems, precious objects, sculptures, icons, paintings, illuminations etc. greatly increased 1) the range of expressions, 2) their portability and 3) their reproducibility. Our histories of art have focussed largely on the aesthetics of expressions. We still need a historical study that explores the role of portability and reproducibility of media in the development and diffusion of culture. We need new models of culture, which avoid the pitfalls of simplistic, monolithic, progress and yet help us to understand the contexts a) whereby one culture fosters diversity and richness of expression while another stifles them and b) why and how one culture produces expressions that increase understanding and tolerance of others, while others produce frameworks of intolerance and exclusion. Culture is not just a question of producing and creating cultural artefacts, tangible and non-tangible. It is also about belief and value systems that can either foster humane interaction or threaten humanity.
In the realm of new expressions there are fairs such as ARCO in Madrid, and there is an important Encyclopaedia of New Media, which offers an overview of some of these new forms of expression. Projects such as Artnouveau set out from the premise that:
advanced information technologies…could not only augment the attraction of cultural heritage, but also provide new tools for the artistic creation process.
In Portugal’s Alberto Sampaio Museum, visitors experimented with immersive virtual painting. Put together by historical, cultural and computer graphic specialists, the system took people back in time and space to a monastery where the original frescoes were located.
Interaction with this virtual art exhibition was limited to entering the real exhibition room and viewing the works. In this respect, it mimicked a visit to a traditional gallery, requiring little effort from the person doing the viewing. …we discovered that people do not want digitised replacements of the original works. Technology of this kind will need to be incorporated into exhibitions, so that people can compare both versions.
Theoretically, the advent of digital media offers almost infinite possibilities for introducing new interplay between traditional cultural objects and the audio-visual realm, with forms of expression such as video, television and film. Here several obstacles lie in the way of realizing this potential. First and foremost, fear in the music and media industries that persons might simply create pirate versions of complete pieces of music, and of entire films and videos, has meant that editing tools have remained largely designed for use by industry rather than by individuals.
In the literary world we have learned to make a basic distinction between a) copying an entire work without permission and acknowledgement, which is plagiarism and bad; and b) making properly acknowledged citations or even subtle allusions to existing works, which is a source of creativity and is good. This ability to use and possibly edit a visual quote from an existing film or other new media source remains one of the most obvious challenges for the decades to come. It requires rethinking our approach to copyright, whereby we continue to prosecute banal piracy and plagiarism, while enabling and encouraging new methods of making visual citations from such works.
In the meantime, we have simple aids such as Microsoft Office Picture Manager within Microsoft Office. There are a number of Paint programs. High level editing software such as Alias’ Maya, which used to cost over $80,000 only a few years ago, has plummeted to just over $6,000, but this is still beyond the reach of everyday users. There is a similar situation in the video editing field. Although some very cheap methods exist, professional systems typically range from $20,000 to $500,000. High level projects such as Virtual Director illustrate technological possibilities. There are various developments in the realm of digital cinema, but these again are largely addressing the needs of industry or the frontiers of research rather than their potential use by the public at large.
If we stand back from the rhetoric of salesmen who are continually trying to persuade us that the latest technology does effectively everything that we might imagine, we are confronted by a very different picture. The 20th century produced far more than 100 million hours of audio-visual material in the form of films, television and video. Yet less than 1 million of those hours are readily accessible even to the industry. Many millions of hours, some pessimists would say many tens of millions of hours, are threatened with extinction if new methods of conservation are not adopted almost immediately. European projects such as PRESTO are addressing this problem. Ironically, they are working largely with the MPEG-1 standard rather than the latest MPEG-4, MPEG-7 and MPEG-21 standards.
In short, our society is producing new materials much more quickly than it is developing reliable, long term methods a) for their preservation and conservation; b) for systematic documentation of their contents; c) for systematic access not only by industry but by citizens generally. In theory, public television programmes and films publicly supported by national film boards, paid for by taxpayers, should be accessible to the public at large. The challenge is not simply access but also having tools whereby this enormous resource can become a starting point for new creativity. Commissioner Oreja’s vision of a film and television network has not yet become a reality but could be an important ingredient in this process.
Conservation is fundamental to all domains of culture. Without it there is no collective memory. Without collective memory there is no historical, cumulative sense of cultural heritage. In the past decade there has rightly been a wave of attention to the challenges of conserving newly created “born digital” objects and maintaining access to digital, electronic versions of our cultural heritage. The European Resource Preservation and Access Network (ERPANET) has drawn attention to these challenges. Less attention has been given thus far a) to how new media could provide important new tools for access to conservation and restoration methods for cultural heritage as a whole and b) to how these materials also offer important new resources for research and education.
In the 1980s, there were visions of creating an international network for conservation materials. This led the Getty Conservation Institute to build on the resources of the Canadian Conservation Institute and resulted in an online version of the Art and Archaeology Technical Abstracts (AATA) in association with the International Institute for Conservation of Historic and Artistic Works. In Europe, there are now over 20 serious conservation resources, including the European Commission on Preservation and Access (ECPA) and the European Confederation of Conservator-Restorers' Organizations (ECCO). In France, the Centre de Recherche et de Restauration des Musées de France (C2RMF) provides an excellent example of a step in the direction of a network of conservation laboratories within a given country, especially through their Chimart initiative.
At the level of specific projects there have been extraordinary developments in this domain. The restoration of the frescoes by Piero della Francesca of the Story of the True Cross in the Church of San Francesco in Arezzo is a brilliant case in point. This was one of the first projects where the entire fresco was digitized systematically before the restoration and various possible interventions were simulated electronically before they were adopted in the restoration itself. A wonderful archive of materials was developed. To date only a glimmering of these materials is available online. It would be an enormous contribution if a) the complete archive of such materials were made fully accessible to restorers and conservators and b) subsets could be made available to researchers and the public at large.
European projects such as Artnouveau, mentioned earlier, have simulated the possibilities offered by new media in the conservation field:
Armed with a virtual reality (VR) headset, users moved around the room to observe the monastery’s original paintings and stained glass windows. They also picked up tips about conservation and preservation, ‘renovating’ damaged parts of the fresco with a VR brush and colour palette. For those familiar with computer games, the process was easy and instinctive. 
An important project on Molecular aspects of Ageing in Painted Works of Art (MOLART) illustrated the potentials of sharing conservation materials among different labs. What is lacking, and very much needed, is a network whereby such new techniques can be linked with the databases of conservators and restorers in various countries and related to databases in memory institutions (libraries, museums and archives), such that they can share materials and experiences concerning individual works and artists within their collections. This would be a very useful practical application of the virtual agora concept proposed within the Distributed European Electronic Resource (DEER, see § 4 below).
The obvious advantages of such a network lie in improving conservation and restoration by sharing cumulative experience, methods and approaches of experts in the field. In addition, these materials potentially have profound implications for our study of art history and cultural history. Beginning with x-rays, conservators and restorers have over the past half century produced a whole range of techniques for examining evidence below the surface of paintings and other cultural artefacts including Ultra-Violet (UV) fluorescence; Infra-Red (IR) photography in pseudo colours; Infra-Red (IR) reflectography and X-radiography. Such techniques, combined with Multi-Spectral Imaging (MSI) and electron microscopy are, for instance, transforming the field of papyrology and as a result more traditional fields such as Classical Studies and Biblical Studies.
These tools reveal a wealth of information about the techniques used in creating the work of art and various stages in composition which could potentially transform our present approach to art history and indeed to the whole of material culture. In short, the knowledge of conservation and restoration is not only vital to keep our heritage “alive”: it offers invaluable knowledge concerning the ways in which this heritage was produced. In cases such as burned papyri and palimpsests, the conservation methods allow us to read and study what was previously invisible. Conservation was once the domain of a handful of experts in a few advanced labs; its results now need to be shared much more widely. In terms of memory institutions, the history of interventions and accompanying documentation needs to become a regular category in searching for knowledge. In terms of education and learning, the knowledge gained by these experts needs to be shared and integrated into our educational system. The changes brought by interventions for restoration and conservation are essential dimensions for understanding the history of these objects. Intimately connected with these changes in restoring extant objects are the reconstructions that have been made of objects that are no longer extant. These too need to be integrated as a clearly identified category within our knowledge structures and become a part of curricula within our educational systems.
During the 1970s, as the Internet became an established mode of communication, first largely through e-mail, there was constant attention to the technological limits of processors, storage, and transmission. As the hype about computers spread in the 1980s, there was a recognition that something more was needed than technology in isolation. The phrase “content is king” became a buzzword. In 1996, Bill Gates made the phrase famous.
Many assumed that this discovery on the part of industry would soon guarantee that the technological hurdles would be resolved. To some extent this attitude was justified. The past decade has certainly brought incredible technological progress and there have been many unsung heroes as networks which were originally designed on an ad hoc basis exploded in their applications to meet the needs of hundreds of millions of persons around the world. In Europe today the cost of hardware for a simple new computer is c. €300, while the cost of the software (Windows plus Office) is €200+. In the realm of personal computing, software increasingly costs more than hardware. The good news is that software crashes are much less frequent than in the 1980s. The bad news is that after a half century of software we still do not have fully reliable systems. The promise of open source remains but does not as yet have the political support to make it a standard procedure.
Meanwhile, those who assumed that such commercial interests would “sort out” the market potentials in the cultural field have slowly recognized that the interests of industry, which are understandably focussed on making money, are sometimes or even often at odds with the interests and needs of cultural institutions and the needs of citizens who, as taxpayers, expect access to their heritage. In some sectors, especially the music and publishing industries, there are some forces which are concerned only with financial gain at the expense of both authors and users. Those concerned with grid computing are working on new levels of middle-ware (which now includes upper-ware and under-ware) that will allow industry to monitor and charge for individual access at will. Thus far the potentials of these tools for helping citizens have been largely ignored.
Within the European Union there is a commitment to increase the research budget to 3% of Gross Domestic Product (GDP). On the surface this is excellent news which might ensure more attention to citizens’ needs. There is, however, also discussion that the EC will in future provide 1 euro for every 2 provided by industry. If industry pays for the majority of development, its insistence on having a majority of the say with respect to the direction of research is likely to increase.
This is not simply the fault of industry. Industry will make money on everything it can. As cultural institutions come under increasing pressure to “perform” financially they too often take a similar attitude. In 1995, there was a proposal by a French company, working in a European project, that users should be charged separately for every page they viewed on the Internet. Fortunately the proposal was blocked. If such a proposal rightly sounds absurd today, it is sobering to note that the Bibliothèque Nationale de France (BNF) at present expects to receive €20 for a 2,000 x 3,000 pixel image, €40 for a 4,000 x 6,000 pixel image, as well as €50 to use one black/white image online and €110 to use one colour image online. If these prices were to remain in place and become standard practice, the vision of exciting new web sites by individuals would be doomed. Indeed the hope of providing researchers with new possibilities of communicating their work would be equally doomed. Digital media must offer new possibilities not only to the rich but to all citizens.
Fortunately, in this case, the examples of history are salutary. Less than a decade ago, when the catalogues of the Bibliothèque Nationale were scheduled to go online, there were plans to charge for access to these also. Similar plans existed at the British Library. Behind the scenes one remarkable gentleman persuaded a handful of research universities (notably Oxford and Cambridge) to put their collections online free of charge. This amounted to over 8 million titles. The British Library was then “embarrassed” into not charging and Paris and other major libraries followed suit. It helped that work towards a union catalogue in Germany, which had been underway since the 1970s, had since resulted in a working system with access to over 40 million titles. Today, the good news is that these national catalogues are freely accessible for France, Germany, Italy, the Netherlands and other countries. This model needs to be extended to museums where we have excellent lists of online museums by Jonathan Bowen working in the context of ICOM, but no systematic access across collections.
As the business community became more interested in the potentials of computers and the content they might process, there arose a vision that one might control the entire production cycle not only in the case of physical machines, but also in terms of knowledge and culture. By 1988, companies such as IBM had developed a vision of digital libraries whereby they would help scan images from major collections and also try to own the rights to those collections. This led to projects at the Archivo General de Indias (Seville), the Vatican Library, the Luther Library (Wittenberg), the Hermitage (Saint Petersburg), the Edo Museum (Tokyo) etc. DEC (Digital Equipment Corporation, now part of HP) also began a campaign of buying up collections of content around the world. In the early 1990s, the two companies were, for instance, vying for various collections within the campus of the University of Rome (La Sapienza).
The idea, of course, was hardly new. William Randolph Hearst, who was immortalized in Orson Welles’ Citizen Kane, on seeing the Louvre during his first trip to Europe as a six-year-old, is reported to have said to his mother: “I want one of those.” By the time that Bill Gates had “discovered” that content was king, he too developed a vision of acquiring rights especially to images of great works. This led him to found Corbis. As irony would have it he was soon outmanoeuvred in this game by a competitor, Getty Images, which bought the entire Kodak archives, and now owns over 70 million images, considerably more than Mr. Gates.
In Europe there was considerable worry that Mr Gates and other large companies would try to buy up the rights to culture. The fears were not unfounded. In November 1995, Mr Gates attempted to acquire the rights to the whole of Hungarian art. The Memorandum of Understanding (MOU) for Multimedia Access to Europe’s Cultural Heritage was designed to defend against this. The MOU, with over 700 signatories, confirmed that the cultural community was genuinely interested in the problem. It provided a useful survey of existing technologies and challenges but it provided no funding. The MEDICI (Multimedia for EDucation and employment through Integrated Cultural Initiative) Framework was intended to take this further. There was great interest: the number of signatories practically doubled. Further lists of available technology and surveys of challenges emerged but the project was blocked and again there was no funding. On the surface these projects failed. Fortunately, the ideals which they championed won the day.
Major libraries, museums and archives have accepted that since they are publicly-funded bodies, they have a commitment to disseminate their treasures, which includes making basic versions of their collections accessible free of charge on the Internet. They have recognized also that this commitment certainly does not exclude the possibility of selling high level reproductions of these same collections in the manner that postcards, posters and other reproductions were sold (especially as souvenirs) in the past. The stores of the Réunion des Musées de France (RMN) are perhaps the best example of the potentials of this approach.
The good news is that there has been an explosion of access to images of all types including cultural images. In 1995, only a handful of museums such as the Uffizi and the National Gallery in London offered access to some works in their collections. Major browsers had no image searches. Google launched its Google Images on an experimental basis in 2002. By January 2004, the number of images accessible was over 400 million. In January 2005, that number had grown to 1,187,630,000 images, while the number of web pages had grown to 8,058,044,651. The bad news is that the potentials of this explosion have yet to be harnessed. The Google search engine is designed for a single word and is not yet designed for refined or complex searches.
There are parallel problems in the museum world. Early pioneers such as the Uffizi have not always kept up. A survey of major museums reveals much progress. Two of the best examples at present are the National Gallery in London, which gives some access to all the 2300 paintings in its public collection, and the Louvre, which has a database that provides access to 30,000 images, amounting to 98% of its exhibited works. The photographic collection of the Réunion des Musées Nationaux (RMN) has a Photo Agency with 500,000 images of which 140,000 are in digital form. This is one of the best collections to date.
Full text versions of books and manuscripts from museums and libraries are also coming online. For instance, the Ambrosiana in Milan provides folio-by-folio free access to the complete Codex Atlanticus. The Royal Library at Windsor has similar plans for Her Majesty’s Collection including Leonardo drawings and writings. In the United Kingdom, the Joint Information Systems Committee (JISC) has co-ordinated a very important project to scan in the full texts of 125,000 early English books which are accessible free of charge to every college and university in the UK. A limitation is that it is not yet freely accessible to English taxpayers as a whole, let alone to persons everywhere. By contrast, the Gallica project at the Bibliothèque Nationale de France (BNF) is scanning the full texts of 70,000 works. Most of these are accessible in facsimile form. Thus far 2,500 works are accessible in full text form such that one can search every word. This collection is available publicly and free of charge. In addition, the French Ministry of Culture now has a database of 2 million objects which have been digitised. A next step might be to extend this principle to all European countries. Even so, such projects are already vitally important because they confirm that large scale full text access is fully possible: something that even a decade ago was dismissed as utopian.
For all the progress that has been made, access is still very much a problem. The Metropolitan Museum of Art in New York has a brilliant system for about 6,500 images. But its collection includes over 2 million objects. Other world level collections such as the Staatliche Museen Preussischer Kulturbesitz as yet offer no access to images of their collections. In most memory institutions only a fraction of their collections are on display to the public. One of the great, unfulfilled potentials of electronic communication is to provide public access to these immense resources, which are not on display to the public: in some cases as much as 95% of what they have.
Very much lacking is a co-ordinated plan whereby collections in Europe become accessible in a coherent form. Over the past fifteen years the European Union has supported a series of projects ranging from Remote Access to European Museums and Archives (RAMA) to the Eurogallery, none of which has resulted in an ongoing working system. Perhaps the only serious example of a working system is the Virtual Museum of Canada (VMC), which offers distributed access to materials from 1075 member institutions from across the 8,000 kilometer expanse of Canada. The Canadian Heritage Information Network (CHIN) which has developed the VMC has also been a pioneer in creating some of the first online virtual exhibitions that combine materials from collections in different countries: e.g. landscapes from both Canada and Russia.
A European Union project, E-Culture Net has outlined the need for a Distributed European Electronic Resource (DEER). This would comprise three elements: 1) a distributed repository (with the contents of libraries, museums and archives); 2) a virtual reference room and 3) a virtual agora, which would enable persons collaboratively to share and discuss these materials.
Culture is predominantly local, regional and national. The European Union (EU) and the European Commission (EC) are at an inter-national level. At its inception there was a clear decision that the Union should largely leave cultural matters to individual states for much the same reason that Germany has traditionally left this to its individual “provinces” (Länder). At the time of the Lisbon Strategy efforts to include the word culture were not successful. The latest high level report has not changed this stance. Meanwhile, Giorgio Ruffolo, a Member of the European Parliament, has outlined why access to local and regional culture is vital for Europe in an important book aptly entitled A Unity of Diversities.
To be sure, culture is not entirely absent from the work of the European Commission. There is a Directorate General (DG) of the EC devoted specifically to Education and Culture (former DG 10 + DG 22). This DG has excellent intentions but has very little funding. Traditionally the technology dimension of cultural heritage came under DG XIII: Advanced Communications and Technology Services (ACTS). With the restructuring of the EC, this section dealing with cultural heritage was moved to Luxembourg and formally became a part of the Information Society and Media DG. It acquired various names including Digital Heritage and Cultural Content. It is now called Directorate E 3: Learning and Cultural Heritage (figure 1).
Within DG E.3 in the sixth framework programme the main integrated project is BRICKS (Building Resources for Integrated Cultural Knowledge Services), which “aims at establishing the organisational and technological foundations of the European Digital Memory (EDM)” and two networks of excellence, EPOCH (Excellence in Processing Open Cultural Heritage) and another on Digital Libraries (DELOS). Another project, TEL-ME-MOR (The European Library: Modular Extensions for Mediating Online Resources) specifically aims at integrating the 10 new member states. In addition, the unit is supporting MINERVAPLUS (Ministerial Network for Valorising Activities in Digitisation PLUS), a significant effort to co-ordinate digitization criteria/standards at a policy level among various member states.
E.2: Knowledge and Content Technologies
E.3: Learning and Cultural Heritage
E.4: Information market
E.6: eContent and Safer Internet
E.7: Administration and Finance
Figure 1. Information Society and Media Directorate General (DG) and its directorates.
There are at least two fundamental dilemmas with all these projects. First, they are short term, normally 2-4 years. The issues surrounding digital cultural heritage are long term. They require co-ordination of databases from local, regional, and national libraries, museums and archives. This cannot be achieved in a 2-4 year project. Needed are ongoing structures that enable major and minor institutions to co-ordinate their data structures. Projects such as GABRIEL (Gateway to Europe’s National Libraries) and Bibliotheca Universalis point in the right direction, but do not yet address the crucial challenge of linking local and regional resources with national ones. Projects such as the national union catalogues are tackling these challenges of linking catalogues from small local and regional libraries with their national counterparts. The high-level efforts of Bibliotheca Universalis (originally a G7 pilot project) and GABRIEL need to be integrated with the emerging national catalogues with respect to their authority files.
A second dilemma entails compartmentalization. For obvious administrative reasons the structure of the European Commission is divided into many sectors as reflected in figure 1. There is similar compartmentalization in government structures. France, for instance, has excellent work on the key themes of culture, but much less interaction between these themes. Meanwhile, the digital revolution is affecting not just museums, libraries and archives, but the whole gamut, including infrastructures and audio-visual production. Culture is not only about content in the form of objects that need to be digitized. It is also about different knowledge structures, assumptions, attitudes about and approaches to what we can know, and what is worth knowing. This means that culture is also very much about interfaces. Within the framework of the European Commission this domain is officially covered by DG E.1, which has a number of interesting projects, but effectively none that reflect cultural and historical dimensions. Regrettably DG E.1, which is now responsible for earlier activities in language technologies, has thus far downplayed the importance of multilingualism and ignored important pilot projects such as Accès Multilingue au Patrimoine (AMP).
In the past, one important reason for keeping culture out of these technological developments was a fear that one “take” on these questions could or would be imposed across the board. Ironically, if the issue is avoided, then the standard solution of the day implicitly imposes precisely such a conforming solution. Needed is a new framework which addresses how emerging technological infrastructures can help create access not just to content in different languages, but also to different knowledge systems, different mind sets, parallel epistemologies. This new field entails study of at least two important areas: how cultures develop 1) different relations between cultural artefacts and the physical world and 2) different strategies as to when to communicate what to whom. For the purposes of this essay a mere outline of these problems is possible.
In Western culture there was an assumption that aesthetic distance is something positive. From the 16th until the mid-20th centuries there were efforts to distinguish clearly between subject and object; between subjective and objective ways of knowing and modes of expression. Art, and culture generally, were seen as means of helping to separate us from nature. By contrast, in the East there is a long-standing tradition whereby art and cultural experiences serve to unite us rather than separate us from the physical and the metaphysical worlds. In the Slavic tradition, which has strong Byzantine and ultimately Greek influences, there is the notion of icons and the iconostasis. The icon can be kissed, but this does not unite us with the icon or what it represents. It points to, and brings us into some relation with, a world beyond without pretending to unite us. Like the Greek middle verb, it is something in between the quest for union of the East and the simple subject/object distinction of the West.
In pre-literate cultures (formerly called primitive cultures) there is an even greater quest that their cultural objects – which we in retrospect see as their art – should perform a function of connecting them with a world beyond. Interestingly enough, we have modern terms such as magic, totemism and shamanism that attempt to reflect this approach, but we have effectively no methods to reflect such completely different approaches in our interfaces. Google “pulls up” images of a Renaissance Italian painting, a Russian icon, a Buddhist statue and an aboriginal totem as if they were all just “hits.” There is a grave danger that this approach will undermine our awareness of significantly different approaches to the world, rather than opening our awareness of and respect for alternative ways of being, seeing, knowing, comprehending, and expressing.
Thinkers such as Innis, McLuhan and Mattelart have made us aware how a given medium tends to privilege a given kind of knowledge and a given mode of communication. McLuhan claimed, for instance, that the shift from oral communication to one with a written alphabet in Greece of the 5th c. B.C. led to a differentiation between grammar, dialectic (logic) and rhetoric (effects of language), with more emphasis on logic, and that the shift from written to print culture in the 15th century brought a triumph of logic over grammar and rhetoric. He claimed that television brought a shift back towards the effects of language (rhetoric).
If we accept that the shift from analog(ue) to digital entails convergence in a special sense, in that it allows a translation from one medium to another at will, then the essence of the digital revolution lies in its being the first innovation in media which does not impose a single kind of knowledge or a unique form of communication. The deeper implication of the so-called multi-media revolution, which might more aptly be called an omni-media revolution, is that it enables the use of multiple ways of knowing and communicating. Theoretically we should be able to develop very different interfaces to reflect our very different approaches to knowledge, our attitudes towards truth and veracity, and even the extent to which we reply with long answers, short proverbs or silence.
Very simply, those who have developed the Internet thus far have attempted to create one mode of access to all electronic resources. As we know from Google this brings tremendous results. In theory, the many efforts towards a semantic web will take us further. Thus far, however, these efforts are limited to creating new services for what might more accurately be called a transactions web. They are concerned with removing ambiguities of meaning to permit effective machine-to-machine communication. They are not interested in increasing our awareness of the subtleties of human communication and of the richness of ambiguities wherein lies the essence of the human spirit. For the purposes of business and commerce the monolithic approach to unequivocal meaning, the quest for “disambiguation”, is an excellent step forward. For the realm of culture, this quest goes in the opposite direction of that which is needed. In the realm of culture, we need new ways to appreciate and understand not only equivalents but also ambiguities and differences; not to remove them but to see them more clearly. In mediaeval scholasticism the phrase distinguendum est was a point of departure for learning. As we move into an electronic, digital realm we need new equivalents and versions of this well-established method.
These differences include different strategies of how and when we share what with whom. In the West, a rhetoric of freedom of information means that we are theoretically committed to making everything available to everyone at any time. In practice, official secrets in politics, defence and increasingly in industry impose serious limits on this ideal. Even so, this quest for open sharing is very different from pre-literate (e.g. aboriginal) tribes and societies who believe that what we see as their cultural objects link them with a sacred world beyond. For these persons advanced knowledge of what we would call their shamanism, religion, belief systems, medicine, magic is only “fit” for elders who have reached a given level of initiation – a notion that remains in secret societies even in the West. Putting all the knowledge of these pre-literate persons online in an undifferentiated way would be both a threat and an insult to their traditions, as Indians in North America and Aboriginals in Australia have insisted with increasing vehemence.
Needed therefore are new kinds of filters. These are not about censorship as such, although they might well appear so. All materials and knowledge can be online. Access to certain levels of knowledge would require acceptance and consent from the guardians of that knowledge, not unlike the way that access to university libraries requires a student card or equivalent. Implicit in such a filter, which introduces layers of knowledge, is a whole new area of study: namely, the history of how cultures shift from a quest to keep knowledge for themselves towards contexts whereby they seek to share their knowledge and insight with others. The histories of censorship and cryptography are peripheral aspects of this more complex approach to cultural history. The extent to which we are able to incorporate these alternative approaches into our evolving Internet may prove to be one of the most important defences of what we think of as a democratic approach to the world. We need to maintain a vision of open society as Karl Popper urged, without falling into the pitfalls of opening everything, everywhere to everyone. This requires a more critical and subtle approach to the rhetoric of anything, anytime, anywhere that is now typically associated with the ideal of online learning.
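The logic of such a filter can be sketched very simply: every resource remains online, but each carries a level assigned by the guardians of that knowledge, and each reader holds a granted level. The levels, names and data used below are purely illustrative assumptions, not drawn from any actual community or system:

```python
# Illustrative layered-access filter: everything is online, but resources
# above a reader's granted level are withheld pending consent from the
# guardians of that knowledge. All data here is hypothetical.
resources = {
    "public history of the community": 0,
    "ceremonial song texts": 2,
    "initiation knowledge": 3,
}

# Levels granted by the guardians to individual readers.
readers = {"visitor": 0, "student": 1, "initiated elder": 3}

def accessible(reader):
    """Return the resources this reader's granted level permits."""
    level = readers.get(reader, 0)
    return [name for name, required in resources.items() if required <= level]

print(accessible("visitor"))          # only the public layer
print(accessible("initiated elder"))  # all three layers
```

The point of the sketch is that nothing is deleted or censored; the filter merely encodes the guardians' own strategy of when to communicate what to whom.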
Enormous progress has been made with respect to interoperability of hardware, software and systems in general. We have outlined some challenges. A number of major challenges remain. All over the world, physical networks are expanding in capacity and range. In Europe, the GEANT network is becoming one of the fastest and most extensive in the world. Alas, the problem of the last mile, which was the stumbling block of the 1980s and 1990s, has not yet really been resolved. In many cases, the gigabit connection reaches the campus, sometimes even the building, but does not reach that part of the campus or building where the content is. A decade ago content was a problem insofar as it was often lacking. Today cultural institutions and universities have enormous amounts of content which are not yet being shared. There is a great need to complement the existing physical networks a) with extensions that reach the researchers and b) with new research frameworks that encourage scholars to share the resources that they have. At a policy level this requires that those in the cultural sector be provided with contexts where their needs are made clear to those in the technology sector.
Part of that mandate must be not only to communicate needs but also to begin creating distributed networks of shared repositories. We have cited examples such as Gallica in France or JISC in the United Kingdom where this has already begun to happen. This process needs to be extended across Europe to develop distributed repositories that reflect the contents of all the member states. This would be a first step towards a Distributed European Electronic Resource (DEER). We have also noted a trend in the library community towards national catalogues which integrate local, regional and national catalogues. This trend also needs to be extended and linked with the foregoing in order to develop virtual reference rooms with cumulative catalogues that enable multi-lingual, multi-dimensional access to the repositories.
As Marshall McLuhan noted many years ago, there is a tendency to approach the future looking through a rear-view mirror: i.e. new technologies typically adopt the methods of their predecessors. Hence, Gutenberg’s printed edition of the Bible relied heavily on the layout of text in manuscripts. Similarly, today there is a tendency to reproduce online the information from earlier printed and even handwritten catalogues. To take an example from an excellent and scholarly library: the Bibliotheca Ambrosiana provides very detailed notes complete with bibliographical references, but continues to present these as if they were in a printed book. If these connections were hyperlinked they would be much more useful.
It is very important to become much more aware of the increased possibilities of access introduced by digital media. Two examples will serve for the purposes of this essay. In traditional memory institutions there is copious information about variant names. Catalogues frequently display only a small number of these variant names. Both proper names and place names frequently also change from one language to the next. A number of European countries typically have bilingual catalogues (and Switzerland has some quadrilingual ones). A project at the C2RMF provides a concordance of technical terms in 19 languages. Needed are concordances of variant names that bridge all the 25 languages of the European Union and eventually go much further to cover all languages. While distributed resources are clearly the way of the future, access to such sources requires centralized authority files or reference rooms which link these resources virtually in order to allow us to search via variant names.
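The mechanism such an authority file implies can be sketched in a few lines: every variant name, in whatever language, resolves to one canonical identifier, through which the distributed holdings are then searched. The names, identifiers and holdings below are illustrative assumptions only, not taken from any real authority file:

```python
# Hypothetical multilingual authority file: each variant of a proper
# name or place name points to one canonical record identifier.
AUTHORITY = {
    "Leonardo da Vinci": "person:leonardo",
    "Léonard de Vinci": "person:leonardo",   # French variant
    "Venice": "place:venezia",               # English
    "Venise": "place:venezia",               # French
    "Venezia": "place:venezia",              # Italian
    "Venedig": "place:venezia",              # German
}

def canonical_id(name):
    """Resolve any variant name to its canonical identifier, if known."""
    return AUTHORITY.get(name)

def search(records, name):
    """Search distributed holdings via whichever variant the user knows."""
    cid = canonical_id(name)
    return records.get(cid, []) if cid else []

# Hypothetical distributed holdings, keyed by canonical identifier.
holdings = {"place:venezia": ["Map of Venice, 1572", "Guida di Venezia, 1890"]}
print(search(holdings, "Venedig"))  # same records as a search for "Venise"
```

A reader searching in German thus reaches the same records as one searching in French or Italian; the centralized concordance does the bridging, while the resources themselves remain distributed.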
The same needs to be done with titles. In traditional library records the card for an individual author’s work would typically provide a standard title of the work in its original language, plus the equivalent title if the edition was in a foreign language. Printed versions of such catalogues (e.g. the National Union Catalogue of the Library of Congress), typically listed publications by author and then alphabetically under the title of the work in translation. In the case of authors with many titles this made attempts to determine the number of editions of a given work a very tedious exercise. By contrast, electronic versions could readily provide an alternative listing via the standard title in order to provide this information. In most cases no new metadata needs to be created. It is “simply” a question of co-ordinating the reference materials which are already in place.
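Since the standard title is already recorded, the alternative listing described above amounts to a simple re-grouping of existing records. The records below are hypothetical examples used only to illustrate the principle:

```python
from collections import defaultdict

# Hypothetical catalogue records for one author: each edition carries
# both its published title and the standard (uniform) title of the work.
records = [
    {"title": "Hamlet, Prince de Danemark", "standard_title": "Hamlet", "year": 1846},
    {"title": "Hamlet", "standard_title": "Hamlet", "year": 1603},
    {"title": "Amleto", "standard_title": "Hamlet", "year": 1869},
    {"title": "Macbeth", "standard_title": "Macbeth", "year": 1623},
]

def editions_by_work(recs):
    """Re-sort an author's editions under the standard title of each work."""
    grouped = defaultdict(list)
    for r in recs:
        grouped[r["standard_title"]].append(r)
    # Within each work, order the editions chronologically.
    return {work: sorted(eds, key=lambda r: r["year"])
            for work, eds in grouped.items()}

by_work = editions_by_work(records)
print(len(by_work["Hamlet"]))  # 3 editions of the same work, across languages
```

No new metadata is created here: the standard title was on every record already, and the tedious manual count of editions becomes a single grouping operation.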
Almost no one can hope to learn fluently all the 25 languages of the EU. No one can learn fluently all the 6,500 languages of the world. A decision to reduce everything to one lingua franca may have superficial attractions, but would imply an enormous impoverishment in cultural diversity. Putting in place centralized reference rooms or virtual reference rooms which integrate distributed authority lists, which respect and bridge between the various languages, would mean that persons could potentially search in their native language and still gain access to materials in all the languages of the Union. This would be a new kind of reference room.
If we stand back to look at knowledge over the centuries we can see clearly how new technologies greatly altered and transformed methods of knowledge organization. For instance, the 13th century, which saw a shift from parchment to paper, greatly increased the amount of knowledge produced and copied and introduced the then novel idea of creating alphabetical indexes. The 17th century, which saw a shift from written correspondence (the world of letters) to printed correspondence (e.g. Transactions of the Royal Society, Journal des Savants), led to secondary literature as we now know it. The latter 19th century, which saw another quantum leap in knowledge production through the introduction of cheaper printing methods, also brought the initiatives of Otlet and La Fontaine (via the Mundaneum) to create proper bibliographies for secondary literature.
Networked computers and the Internet are bringing a further quantum leap in knowledge production. Accordingly, we need new methods to provide greater differentiation in both our documentation methods and our search strategies. The rhetoric of Dublin Core may offer a very pragmatic stop-gap measure, just as does the trend to distinguish between peer-reviewed and other journals. Needed are more fundamental advances.
First, we need further distinctions between kinds of secondary literature: between 1) specialized studies and monographs which focus on a given text or artefact through in-depth analysis (e.g. what does Shakespeare mean in Hamlet Act three, scene one, line four); 2) comparative studies which attempt to relate those texts and artefacts to similar stories (e.g. how does Hamlet relate to other stories of princes who feigned madness); 3) studies concerning conservation and restoration of a text or object (i.e. allowing us to see how interventions have changed the object we are studying); and 4) reconstructions (which amount to persons’ assumptions and hypotheses about how objects which are no longer extant might once have looked).
Second, we need more systematic ways of presenting knowledge at different levels. Our catalogues today record whether a resource is a dissertation, a book, a journal article, a magazine or a newspaper article. Yet when we search for an author in these catalogues or in search engines such as Google we typically receive indiscriminate lists where all these types of communication are presented randomly. A relatively slight effort in distinguishing between different publication and media types could make great contributions in helping distinguish different levels of quality in an author’s production.
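The slight effort involved can be made concrete: since the catalogue already records the media type of each item, an indiscriminate hit list need only be grouped and ranked by those existing types. The result list and ordering below are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical mixed result list, as a search engine might return it.
results = [
    {"title": "A Note on Hamlet", "type": "newspaper article"},
    {"title": "Hamlet and the Prince's Madness", "type": "monograph"},
    {"title": "Feigned Madness in Early Drama", "type": "journal article"},
    {"title": "Shakespeare's Sources", "type": "dissertation"},
]

# An assumed ranking of publication types, from most to least formal.
TYPE_ORDER = ["dissertation", "monograph", "journal article",
              "magazine article", "newspaper article"]

def by_publication_type(hits):
    """Group an indiscriminate hit list by the media type already recorded."""
    grouped = defaultdict(list)
    for hit in hits:
        grouped[hit["type"]].append(hit["title"])
    return {t: grouped[t] for t in TYPE_ORDER if t in grouped}

for pub_type, titles in by_publication_type(results).items():
    print(pub_type, "->", titles)
```

Instead of a random list, the reader sees at a glance which of an author's works are dissertations and monographs and which are newspaper pieces: a first, crude proxy for different levels in an author's production.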
New access to the enduring knowledge of memory institutions constitutes a very significant dimension of the possibilities introduced by new media. At the same time, these new media are introducing new personal knowledge and new collaborative knowledge, only a subset of which will eventually become a formal part of enduring knowledge. This poses enormous new challenges of finding ways to relate personal, collaborative and enduring knowledge. Creating a virtual agora wherein these different kinds of knowledge can be shared and studied is a first important step in that direction.
Recent developments such as Wikipedia point to enormous possibilities which are being introduced by this trend towards open and collaborative sharing. They also help to identify the shortcomings of sharing indiscriminately. If articles remain anonymous, and if there are no methods for assuring that the author involved is truly an expert in the field, then there is no way for outsiders to the field to know how reliable and how profound the contribution is. We need to allow anyone to comment on anything, not unlike the way mediaeval monks had their marginal notes (glossa and scholia), and yet still distinguish carefully between standard accounts and personal views.
The combination of 1) distributed repositories; 2) virtual reference rooms with cumulative catalogues of authority files and 3) virtual agoras would lead to a Distributed European Electronic Resource (DEER). Such a network could be a first step towards a World Online Distributed Electronic Resource (WONDER).
The shift from analog(ue) to digital marks one of the most fundamental changes in the history of communication. Parts of this transition are happening with enormous speed. In the past decades it has brought access to catalogues with over 100 million titles (e.g. RLIN). It is bringing online full-text versions of hundreds of thousands of books and manuscripts, a figure that is likely to become millions within a decade. Parts of the transition have not yet happened but have a clear time-frame: e.g. the introduction of digital television sometime in the period 2008-2015 (depending on the country). Many aspects of the transition will take much longer. Many dimensions of these changes have not yet even been recognized as problems.
During the 1990s those trying to understand the phenomenon tried to capture it with buzzwords such as multimedia (which was then downplayed as the M word); then with convergence. Since 2000, the emphasis has shifted to the e-word: e-business, e-culture, e-government, e-everything. Such attempts suggest that the revolution is merely about a simple conversion process, as if it were simply a question of scanning and storing. As some critics have been quick to point out there are also new problems of preserving access to these electronic materials as the technology evolves with remarkable speed. These are obviously important matters and constitute immediate challenges. Accordingly they are being addressed on a number of fronts.
This paper has claimed that the challenges posed by the digital revolution are much more profound and wide reaching than is generally recognized. Three main areas were outlined: expression, conservation and communication. A number of important and exciting initiatives were cited as examples to confirm that tremendous progress has been made in the past decades. At the same time a number of challenges were outlined. We suggested that three fundamental steps might be taken: to develop 1) distributed repositories in the form of digital libraries, museums and archives; 2) virtual reference rooms which would integrate authority files and reference materials (classification systems, dictionaries, encyclopaedias, catalogues and bibliographies) and 3) a virtual agora which would enable persons to share collaboratively the resources provided by the above and to begin relating new personal and collaborative knowledge with the enduring or perennial knowledge of memory institutions. Together these three initiatives would lead to a Distributed European Electronic Resource (DEER). This would provide a framework wherein the larger challenge of a gradual reorganization of the whole of knowledge could become a practical reality. As this occurs, the full scope of the long term digital revolution that has begun will become manifest.
Maastricht, 9 March 2005
Since the addresses of websites are constantly changing, their titles are for the most part included such that these can be used in Google if the address changes.
 Computer 50
http://www.computer50.org/index.html. It is very likely that there were earlier experiments at Bletchley which was a secret defence facility.
 Hyper Text Markup Language
 Hyper Text Transfer Protocol
 For an introduction to historical aspects of the Internet see the author’s American Visions of the Internet at http://www.sumscorp.com/new_media.htm under Internet. For a more extensive discussion see the author’s Augmented Knowledge and Culture, Calgary, University of Calgary Press, 2005 (in press).
 Digicult Has also produced a useful survey of Core Technologies for the Cultural and Scientific Sector, January 2005.
 Encyclopaedia of New Media
 Artnouveau Network http://www.zgdv.de/zgdv/departments/z2/Z2Projects/artnouveau/index_html_en
Building a virtual art community
The pricing of Maya remains unchanged from Maya 6 - Maya Complete 6.5 is $1999 and Maya Unlimited is $6999. Upgrade pricing is $899 for Maya Complete and $1249 for Maya Unlimited. Both Maya Complete and Unlimited are available on the Windows, IRIX, and Mac platforms. Cf. Alias Announces Latest Version of Maya, 31 January, 2005.
 NCSA Virtual Director (by Donna Cox et al.) http://virdir.ncsa.uiuc.edu/virdir/
AATA Online, Getty Conservation Institute
Canadian Heritage. Conservation Sources
http://www.cci-icc.gc.ca/links/cons-sour_e.shtml. Such institutions are emerging around the world. Some such as Heritage Conservation Net (http://www.heritageconservation.net/) have grand names that are not always matched by the universal competence that such names might suggest.
 Piero della Francesca
Cf. http://apple.csgi.unifi.it/~restauro/conservazione/pix.html which also alludes to this work.
 Building a Virtual Art Community
“BYU Adapts Space-age Technology to Study Ancient Documents,” BYU News, 6 February 2002. http://www.et.byu.edu/news_imaging.htm
 Papyrology and the dating of the New Testament
 Bill Gates, Content is King (1/3/96):
ICOM Virtual Library Museum Page
 For example, if one types in Leonardo Last Supper Copies one arrives at one important copy, namely, at Tongerloo. If one types in Leonardo Last Supper there are 1040 hits. A Search within results still produces only one copy. Typing Last Supper, produces 15,600 hits. A search within results now produces 15 hits but still results in only 1 serious copy. If one knows that there is a copy in London and types in Last Supper London there are 51 hits of which hit number 7 takes one there while 50 hits miss the mark. If one knows precisely where the copy is and types in Last Supper Burlington House, one hit takes one directly to the image. It would help enormously if such image databanks were accessible chronologically, geographically and by medium.
 The Uffizi was one of the pioneers in developing an Internet website. In March 2005, the traditional official website (http://www.uffizi.firenze.it/welcome.html) no longer worked. There was a new site as part of the Polo Fiorentino (http://www.uffizi.firenze.it/musei/uffizi/default.asp#accesso) which led one to a search engine for the collections, but this gave only a list of titles in March 2005. Then there was a Virtual Uffizi site (http://www.uffizi.firenze.it/musei/uffizi/default.asp#accesso), with an author list that theoretically provided images but did not always do so. For instance, there was no image for Luca Signorelli, Last Supper in March 2005. Virtual Uffizi claimed that there were 4 works by Signorelli in the collection. The Uffizi site claimed there were 7.
 Réunion des Musées Nationaux
 Early English Books Online : The Holy Grail of Online Resources?, 2 September 2004.
 Gallica at BNF: http://gallica.bnf.fr/
Gallica offers access to 70,000 digitized works, more than 80,000 images and several dozen hours of sound resources. Together these constitute one of the largest digital libraries freely accessible on the Internet.
Gallica's holdings are drawn from the BnF's digital library. They have been chosen so as to form a heritage and encyclopaedic library. The collection brings together prestigious editions, dictionaries and periodicals, covering many disciplines such as history, literature, the sciences, philosophy, law, economics and political science.
While these holdings privilege francophone culture, they also offer many foreign classics in the original or in translation. This body of novels, essays, journals, famous texts and rarer works is gathered here to allow every reader, from the merely curious to the bibliophile, from the secondary-school pupil to the university scholar, to deepen their knowledge of a period in its political, philosophical, scientific or literary aspects.
Gallica consists mostly of printed works digitized in image mode (1). Nevertheless, a collaboration with the Institut National de la Langue Française of the CNRS and with the multimedia publishers Bibliopolis, Acamédia and Champion has made it possible to put 1250 works online in text mode (2).
Catalogue des fonds culturels numérisés
See the article by Leonard Will, Museum Resources and the Internet, 1996
 Alas this experiment which was funded by the European Commission no longer functions.
 Virtual Museum of Canada
Facing the Challenges. The Lisbon Strategy for Growth and Employment. Report from the High-Level Group chaired by Wim Kok, November 2004.
 Giorgio Ruffolo, The Unity of Diversities. Cultural Co-operation in the European Union, edited by the Parliamentary Group of the PSE, European Parliament, Florence: Pontecorboli, 2002.
Directorate General for Education and Culture http://europa.eu.int/comm/dgs/education_culture/index_en.htm
Digital Heritage and Cultural Content
 The European Library
One serious limitation of this project is that it has effectively ignored the work of Bibliotheca Universalis, GABRIEL or the important contributions of national catalogues.
Information Society and media. Directorate General. http://europa.eu.int/comm/dgs/information_society/directory/index_en.htm#Dir%20E
Directorate E: Content http://europa.eu.int/comm/dgs/information_society/directory/index_en.htm#Dir%20D
 European Library
 Dossiers thématiques
 See the author’s Towards a Semantic Web for Culture.
 Until only a few years ago there was a fashion to photograph handwritten catalogue cards rather than attempting to create new digital versions. Admittedly some of these decisions were influenced by economic and other limitations at the time.
 Bibliotheca Ambrosiana search under drawings under Leonardo da Vinci.
 For a further discussion see the study of Dr Suzanne Keene and Francesca Monti, Distributed European Electronic Resource, Report for E-Culture Net, London, 2004. This should soon be available once more under E-Culture Net at www.eculturenet.org