Abstracts and Presenters

Opening Keynote

Martin Hofmann-Apitius Fraunhofer Institute for Algorithms and Scientific Computing SCAI

“Innovation beyond text – the future of scientific communication”
In the (near) future, science will need more than scientific narratives frozen in textual information sources. Advances in (natural) science and medicine will largely be communicated in models, rather than by casting new knowledge into scientific prose. Novelty of thoughts and concepts will be rationalised (something that would obviously be of great advantage in all the discussions about plagiarism), and the deltas between knowledge models generated some years ago and updated, recent models will be easily navigated and visualised.
Models will not only be based on a knowledge representation language capturing causal and correlative relationships (e.g. OpenBEL, the biological expression language, which can be efficiently used to capture knowledge essential for computable models of a disease). Models such as the (future) Lindsay Virtual Human (intro video) will not only provide access to information mapped to a 4D representation of the human body, but will also support in silico experimentation (e.g. simulation of physiological function). The average, "normal" patient "out there in the wild" will be able to contribute to science by providing "real world data" (example epileptic seizure video). The knowledge base of the future is not a library as we used to know it; it is a combination of abstract representations of the knowledge domain (e.g. ontologies), enriched with data, text, pictures and movies, that allows complex mining tasks and simulations to be performed on this knowledge domain. Participation and contribution by scientists and non-scientists will be greatly facilitated, and just as we have seen the power of the (knowledgeable) crowd with Wikipedia, we will see the power of patient interest groups and other stakeholder communities in the area of knowledge curation and real-world data provisioning.

Martin Hofmann-Apitius holds a PhD in Molecular Biology and worked for more than 10 years in experimental molecular biology.
The screening for novel genes involved in tumour metastasis led him into the area of functional genomics and subsequently to applied bioinformatics. Martin Hofmann-Apitius has experience in both academic and industrial research. Since 2002 he has led the Department of Bioinformatics at the Fraunhofer Institute for Algorithms and Scientific Computing (SCAI) in Sankt Augustin (Germany), a governmental non-profit research institute. In July 2006 he was appointed Professor for Applied Life Science Informatics at the Bonn-Aachen International Center for Information Technology (B-IT). Martin Hofmann-Apitius is (co-)author of more than 100 scientific publications.
The research activities at the Department of Bioinformatics at Fraunhofer SCAI focus on:

  • Automated methods for the extraction of relevant information from unstructured information sources such as journal publications, patents and web-based sources
  • Extraction of chemical information from chemical structure depictions
  • Knowledge-based modelling of neurodegenerative diseases
  • Mining in real-world data (social networks, patient forums, electronic patient records)
Session 1 – Strategy and Policies

Jan Brase DataCite

“Beyond text: New roles for libraries in the 21st century”
Scientific information today can be found in various forms and content types: grey literature, data sets, videos, 3D models, etc. Nevertheless, most library services are still based on textual information. The library community must now rise to the challenges of the so-called 4th paradigm by developing solutions together with the scientific community to make any kind of scientific information available, citable, sharable, linkable and usable. In recent years the German National Library of Science and Technology (TIB) has developed various tools and services to handle chemical structures, 3D models, research data and other types of information.

Jan Brase has a degree in Mathematics and a PhD in Computer Science. His research background is metadata, ontologies and digital libraries. From 2005 to 2012 he was head of the DOI registration agency for research data at the German National Library of Science and Technology (TIB). Since 2009 he has also been Managing Agent of DataCite, an international consortium with 17 members from 12 countries. DataCite was founded in December 2009 and has set itself the goal of making online access to research data easier for scientists by promoting the acceptance of research data as individual, citable scientific objects. Jan is Chair of the International DOI Foundation (IDF), Vice-President of the International Council for Scientific and Technical Information (ICSTI) and Co-Chair of the recently established CODATA Data Citation task group. He is the author of several articles and conference papers on the citation of data sets and the new challenges for libraries in dealing with such non-textual information objects.

Todd Carpenter National Information Standard Organization (NISO)

“Standards issues related to moving off the page”
Standards for distributing textual information have been around for so long that we don't even think of them as standards any longer. As scholarly communication moves away from the strictly textual, we need to develop new standards to ensure efficient distribution, discovery and preservation. This presentation will outline the need for a conceptual model that describes data exchange, highlighting metadata and identification gaps. It will also cover the recently published industry Recommended Practice for Online Supplemental Materials produced by NISO and NFAIS.

Todd Carpenter is Executive Director of the National Information Standards Organization (NISO), a non-profit industry trade association that fosters the development and maintenance of standards that facilitate the creation, persistent management, and effective interchange of information used in publishing, research, and learning. Throughout his career, Todd has served in a variety of roles with organizations that connected the publisher and library communities. Prior to joining NISO, Todd had been Director of Business Development with BioOne. He has also held management positions at The Johns Hopkins University Press, the Energy Intelligence Group, and The Haworth Press.

Jill Cousins The European Library/Europeana

“Europeana: Creating a digital resource for researchers”
Europeana and The European Library have been working for several years to create an interoperable resource of library, museum, archive and audiovisual material. This has now reached a critical mass, and we have started the process of distributing this standardized data. This means placing the content in the path of the user, via APIs or by creating channels such as the proposed Europeana Research. The talk will summarize activity to date, look at the mountain we still need to climb, and ask for feedback on how useful a web resource dedicated to the researcher might be.

Jill Cousins is the Executive Director of the Europeana Foundation, responsible for Europeana.eu, and Director of The European Library. She is on the Board of Globethics and advises on the development of other digital libraries. She has many years' experience in web publishing, which she now applies to the library and cultural heritage arenas. Her past experience includes the commercial publishing world as European Business Development Director of VNU New Media, and scholarly publishing with Blackwell Publishing, running their online journals service. Prior to publishing, she held a variety of marketing and research roles in the information field, ranging from Marketing and Event Director for Learned Information (Online Information) to managing her own research company, First Contact.

Guido F. Herrmann Thieme Publishing Group

“Non-textual information: Contributions by scientific publishers”
The talk will present the importance of non-textual information for scientific publishers. First we will give an overview of scientific publishers' perspectives on research data and how publishers can add value to scientific publications. We will also show the industry's activities in this arena over the last few years. Next we propose measures by which publishers can further contribute, and welcome a discussion with the conference attendees. In the second half of our talk we will show specific examples of how Thieme is incorporating non-textual information into medical and chemical publications.

Guido F. Herrmann has a 16-year background in the publishing and information industry and currently works as Managing Director at Thieme Publishers (www.thieme-chemistry.com). Since 2008 he has been a member of the STM Copyright Committee of the International Association of STM Publishers. Since 2005 he has acted as Chairman of the Scientific Advisory Board of the Fachinformationszentrum Karlsruhe, Germany. He is also a member of the Scientific Advisory Board of the Technische Informationsbibliothek Hannover, Germany. Guido F. Herrmann presents frequently at international conferences and received academic training in chemistry (Ph.D.) and business administration (M.B.A.).

Co-author Eefke Smit is Director of Standards and Technology at the International Association of STM Publishers, where she coordinates the activities of the STM Future Lab Committee, the annual STM Innovations Seminars and webinars, and the STM Research Data Working Group. Representing STM in several industry-wide standards organizations, projects and working groups, she serves, among others, on the Board of the international standards organisation EDItEUR, the Board of the digital preservation coalition Alliance for Permanent Access, the Board of the research data network DRYAD, and on advisory panels such as the NISO business working group on Supplementary Journal Information, the DataCite working group on Certification Standards for Data Centers, and the CODATA-ICSTI group for Data Citation. She is also the STM participant in the EU co-funded projects PARSE.Insight and APARSEN (both on digital preservation) and ODE (Opportunities for Data Exchange).

Her professional background includes responsibility for the development of several successful scholarly information products, among them Scopus™, Scirus™ and ScienceDirect™. She has also spent many years in the print world of academic publishing in areas such as Physics, Astronomy, Engineering, Computer Science and Mathematics. After her university studies in the mid-eighties, she started her working life as a journalist for the Dutch newspaper NRC Handelsblad, writing on research and high-tech developments. Alongside her STM work, Eefke works as an independent consultant in new business development, e-publishing and reproduction rights (IPRO). Recent reports and articles cover topics such as digital preservation, research data and publications, and text and data mining.

Session 2 – Best Practices

Puneet Kishor Creative Commons

“CC 4.0, a new generation of licenses for all kinds of data”
Since its launch in 2002, CC has versioned its core license suite three times, most recently (3.0) in early 2007. The time is right to update the CC licenses so they better reflect changes in the law, technical realities, and social norms since the last release. Over the last year, CC has engaged license experts, stakeholders, and open communities around the world in this process. Now nearing completion, version 4.0 is a tremendous opportunity to ensure the license suite is ideally crafted to further CC's vision and mission over the next decade(s). Some big goals of 4.0 include internationalization, interoperability, durability, addressing sector-specific needs (such as public sector information), and support for existing adoption models. We'd like to share the 4.0 progress thus far with you all, and focus on the specific aspects that relate to science, data and other non-textual information.

Puneet Kishor likes to write programs that manipulate, analyze and visualize information from large datasets, but he worries what will happen to those data and results 50 or 100 years from now. He is unable to read the two-decade-old original digital version of his M.S. thesis, even while he can read the Rāmāyaṇa and Mahābhārata, written a few thousand years ago. This, in part, motivates his passion for permanently free and open access to scientific data. Puneet's main focus at Creative Commons is on science data policy. His extra-curricular engagements include developing tools and techniques for the management and analysis of earth sciences data at the University of Wisconsin-Madison, and research at the National Academies/CODATA on policy issues related to citation and licensing mechanisms for digital scientific information. Any spare cycles are devoted to jazz and beer, preferably concurrently. Puneet has worked, in order, at a rural development NGO in New Delhi, the World Bank in Washington DC, a for-profit GIS consulting company, and the University of Wisconsin-Madison. He is happy to be back to his NGO roots at CC.

Oliver Koepler German National Library of Science and Technology (TIB)

“VisInfo – a visual search system for scientific research data”
In contrast to text documents, research data, with its graphic visualizations, places different demands on indexing, searchability and presentation in the information retrieval process. The aim of the VisInfo project is the development and prototypical implementation of innovative approaches for interactive, graphical access to research data, so as to present it in the information retrieval process and make it searchable in the best way possible. In the project, we have studied and further developed data analysis processes as well as visual search systems, with the prototypical implementation being evaluated on research data from the earth and environmental sciences. In this talk we will present the general challenges of a visual search approach for research data, and the VisInfo prototype as the outcome of the project.

Oliver Koepler holds a PhD in Chemistry. As project manager of several digital library projects at the German National Library of Science and Technology (TIB) in Hannover, he has over seven years' experience in digital library research and development. Currently he coordinates the development of all scientific portals of the TIB.

Brian McMahon International Union of Crystallography

“Bringing crystal structures to life in the scientific literature”
Crystallographic research frequently involves the determination of three-dimensional crystal lattices or molecular structures by diffraction techniques. The adoption of a standard computer-interpretable data description by the International Union of Crystallography (IUCr) has allowed the development of new workflows for its scientific journals. These result in a tight integration of scientific articles with the actual data and metadata that underpin the results they discuss. Data validation is an integral (and largely automatic) part of the peer review procedure for IUCr journals; the structural models can be visualised directly while reading the published article, and can be used as the basis for interactive database searches and queries. The journals also link and transfer data to curated domain-wide structural databases such as the Protein Data Bank, Cambridge Structural Database and Inorganic Crystal Structure Database. For short structural articles, the data files themselves, annotated with the text of the discussion, form the submission medium, and tools have been created to allow easy authoring and the creation of interactive molecular graphics.

Brian McMahon has worked for the International Union of Crystallography since 1986, initially as a technical editor and subsequently in the development of computerised applications in publishing, including journal production workflow, web design and content provision, data checking and evaluation, and the provision of interactive services to journal authors and readers. He has been involved for many years in the IUCr's programme to develop CIF, a standard information exchange and archival framework. He served as CODATA Representative for the IUCr from 2002 to 2012, and has also in recent years attended several ICSTI events.

Jean Archambeault National Research Council Canada

“Uses and challenges of visualizations for competitive and technological intelligence”

Graphical visualizations are increasingly used to support decisions on research investment by enhancing the ability to identify emerging R&D trends, key players, and weak signals of potentially disruptive technologies. Visualization products and services produced to meet specific organizational decision needs, rather than acquired from published sources, require a specialized set of competencies and tools within the organization, and present a number of production, usability, maintenance and life-cycle management challenges. To meet growing expectations, visualizations need to be simple, interactive, updated frequently, and accessible from distributed network environments. Professional advisory support, communication of analysis results, and retrieval capabilities in constrained time frames create particular pressures on service delivery. In this presentation, examples of contextual visualizations and systems for competitive and technological intelligence will be demonstrated and major challenges discussed.

Jean Archambeault is currently Lead of the Strategic Technical Insights program and responsible for the development and delivery of Competitive and Market Intelligence services at the National Research Council Canada (NRC). Since 2002, he has developed and managed the delivery of strategic analysis services based on text mining, visualization tools and tech mining methodologies to NRC and external partners in the Canadian innovation system.

Session 3 – Innovation and Research

Harald Sack Hasso Plattner Institute (HPI)

“Context-driven semantic multimedia search”
Video and multimedia data have become the predominant form of information on the World Wide Web. To cope with the ever-growing amount of multimedia data on the web, search engines have to open up the media content for search and retrieval. Automated multimedia analysis technologies such as automated speech recognition, video OCR, and visual concept detection help to open up large-scale multimedia repositories, although the analysis results are often error-prone and unreliable. Semantic analysis considers the multiple (mostly text-based) metadata streams from automated analysis and constructs a semantic context to enable understanding of the media content. Semantic analysis thus improves metadata reliability by evaluating the plausibility of the semantic assumptions. In addition, semantically annotated multimedia data enables semantic and exploratory search, opening up new ways of accessing multimedia repositories.

Harald Sack is Senior Researcher at the Hasso Plattner Institute for IT-Systems Engineering (HPI) at the University of Potsdam. After graduating in computer science at the University of the Federal Armed Forces Munich in 1990, he worked as a systems/network engineer and project manager in the signal intelligence corps of the German federal armed forces. In 1997 he became an associated member of the graduate programme 'Mathematical Optimization' at the University of Trier and graduated with a PhD thesis on formal verification in 2002. From 2002 to 2008 he did research and teaching as a postdoc at the Friedrich Schiller University in Jena, and since 2007 he has held a visiting position at the HPI, where he is now head of the research group 'Semantic Technologies and Multimedia Retrieval'. His areas of research include semantic web technologies, multimedia analysis and retrieval, knowledge representations, machine learning and semantically enabled retrieval. Since 2008 he has also served as general secretary of the German IPv6 Council.

Rob Fatland Microsoft Research

“Insight in six dimensions: An approach to visualizing data using Layerscape”
Data visualization and cyberinfrastructure are two important technology pillars in support of data-intensive science, i.e. two means of coping with our self-inflicted Data Deluge. With some remarks on the latter (cyberinfrastructure), we discuss here the former:
A visualization engine (http://layerscape.org) that leverages the consumer-driven graphics capabilities of personal computers. This visualization engine is presented, from motivating principles, as a working implementation that incorporates the following:

  • Support for time series data
  • Support for data in 3D, for raster overlays, for geometric rendering, and for marker rendering
  • Support for modestly large datasets (1,000,000 points)
  • Support for free-perspective inspection and choreographed storytelling
  • Cloud service support for data sharing and collaboration
  • Excel-embedded data preparation/injection into the visualization system
  • Emphasis on service-oriented data access
  • No cost to the researcher beyond time to learn: All software and services described are freely available as research tools

Our objective is to share Microsoft technology in the context and community of serious earth system science research. We see tremendous value and importance in this field and consequently feel it is a win-win sociological proposition to contribute tools and technologies that could drastically reduce 'time to insight' from our increasingly complex and heterogeneous data harvest. The philosophical basis for this thematic work is further elaborated in the online-available book The Fourth Paradigm.

Rob Fatland is a Research Program Manager (geoscience emphasis) at Microsoft Research Connections where he focuses on applications of technology to challenges in environmental informatics and data visualization. His earlier work has included research in glacier dynamics using spaceborne radar interferometry and wireless sensor network development for use in environmental monitoring applications. More recently he contributed to the release of a hydrology data search engine called SciScope as a Microsoft Research contribution to the community discussion of how to cope with the 'Fourth Paradigm' of data intensive science. His current work is divided between developing and promoting the free research tools provided by MSR at http://layerscape.org and applications of machine learning and visualization tools to earth system science, particularly in the investigation of the biogeochemistry of land-ocean coupling.

Dieter Fellner Fraunhofer Institute for Computer Graphics Research IGD

“3D documents”
Electronic publishing and digital library systems have, without doubt, radically changed the way of producing and exchanging (scientific) documents as well as the way we all search for, retrieve and consume information in a broad sense. Consequently, the user community, at least in many application domains, is ready for the next logical step: true multimedia documents.

To illustrate this development the talk will address the handling of 3D multimedia information entities, with which users will interact as naturally as they do with textual objects today. The use of these multimedia information entities – which I prefer to call “3D Documents” – will demand answers and solutions to the open issues of efficient creation, markup, indexing, retrieval, dissemination, and long-term preservation: for IT systems, seen from the technical side, and for (digital) libraries, archives and museums, seen from the content providers' side.

Dieter Fellner is a professor of computer science at the Technical University of Darmstadt, Germany, and the Director of the Fraunhofer Institute of Computer Graphics (IGD) at the same location. Previously he has held academic positions at the Graz University of Technology, Austria, the University of Technology in Braunschweig, Germany, the University of Bonn, Germany, the Memorial University of Newfoundland, Canada, and the University of Denver, Colorado. He is still affiliated with the Graz University of Technology where he chairs the Institute of Computer Graphics and Knowledge Visualization he founded in 2005.

Dieter Fellner's research activities over recent years have covered algorithms and software architectures to integrate modeling and rendering, efficient rendering and visualization algorithms, generative and reconstructive modeling, virtual and augmented reality, graphical aspects of internet-based multimedia information systems, and digital libraries. In the latter field he initiated and coordinated the first strategic initiative on ‘General Documents’ (funded by the German Research Foundation DFG, 1997-2005), followed by a DFG Research Center on ‘Non-Textual Documents’ (2006-2011). In the areas of computer graphics and digital libraries Dieter Fellner is a member of the editorial boards of leading journals and of the program committees of many international conferences and workshops.

Remco Veltkamp Utrecht University, Department of Information and Computing Sciences

“Generic multimedia indexing and retrieval approaches”
Because of the growth in the size, number, and complexity of versatile digital libraries, it becomes increasingly pressing to invent generic approaches to multimedia indexing and retrieval. Among the necessary generic aspects are segmentation, feature extraction and indexing of the various media. We will discuss conceptual issues as well as specific instances of features, distance measures, matching algorithms, indexing data structures, and search algorithms. These will be illustrated with cross-media solutions covering images, music, and 3D objects.

Remco Veltkamp is full professor of Multimedia at Utrecht University. His research interests are the analysis, recognition and retrieval of, and interaction with, music, images, and 3D objects and scenes, in particular the algorithmic and experimental aspects. He has written over 150 refereed papers in journals and conference proceedings, and has supervised 15 PhD theses. He was director of the national project GATE – Game Research for Training and Entertainment.

Session 4 – Digital Preservation

Thomas Bähr German National Library of Science and Technology (TIB)

“Digital preservation of AV materials in a library context – challenges, strategies, approaches”
Digital preservation processes differ in detail depending on a variety of factors. Organizational matters such as retention periods or an overarching archiving mandate influence policies, which in turn form the basis for preservation action. On a technical level, preservation processes are directly influenced by the complexity of the material, but also by the availability of tools to analyze and treat it. All preservation action must furthermore be in line with the intended usage of the material.
The Goportis institutions preserve their digital holdings in a cooperatively operated digital preservation system. While the digital preservation system may be considered the technical framework of a preservation workflow for a specific collection, this workflow must be extended by format-specific tools and supported by a variety of institutional decisions and actions.
The presentation will highlight challenges, strategies and approaches of a digital preservation workflow for non-textual materials, using the example of AV materials at TIB. It will show how the requirements of a memory institution influence preservation decisions, and touch on state-of-the-art practices in the digital preservation of AV materials.

Thomas Bähr studied Civil Engineering at the University of Applied Sciences Berlin and Organization Studies at the University of Hildesheim. After working as Deputy Head of the Editorial Department at Springer Science+Business Media, he joined the German National Library of Science and Technology (TIB) and the Goportis Digital Preservation Team as a preservation manager in 2009.

Herbert Grüttemeier INIST-CNRS

“The CNRS engagement in research infrastructures for digital humanities – archiving and dissemination”
The French CNRS (National Center for Scientific Research) is involved in almost all scientific fields, including the Humanities and Social Sciences (HSS). Within the HSS, the transition to digital humanities has been on the CNRS agenda for more than 10 years, leading in particular to the creation of so-called very large-scale research infrastructures (TGIR). The presentation will provide a closer look at two of these TGIRs, TGE Adonis and CORPUS-IR, at related developments, and at the ways in which non-textual information is handled in these facilities. The goal of Adonis is to provide three types of services to the HSS communities: preservation, processing and dissemination of the digital objects produced by research labs. Adonis works closely with CINES, the centre for long-term preservation in French higher education and research, and coordinates the Centres of digital resources that CNRS has created since 2005. A special focus will be on the ISIDORE platform, a unique access point to various kinds of resources using state-of-the-art data linking and enrichment techniques. Finally, it will be mentioned how non-textual information is also finding its way into the HAL central open archive run by CNRS.

Herbert Grüttemeier is project manager and head of international relations at the Institute for Scientific and Technical Information (INIST), a unit of the French CNRS. He holds a doctorate in mathematics from the University of Marseille. Prior to his present position, Herbert was successively involved in INIST's two main traditional activities, database production and document delivery, as well as in library management. In his current function he has participated in INIST actions aimed at promoting Open Access and innovative scholarly communication practices for scientific information and data, and he got his institution involved in the DataCite initiative. His work has also focused on the management of co-operative projects for developing information services in Third World countries. Herbert is an active member of several international associations and committees and has served as president of the International Council for Scientific and Technical Information (ICSTI).

Jakob Beetz Eindhoven University of Technology

“Digital preservation of information models for the built environment – requirements, challenges and approaches in the FP7 DuraArk project”
Long-term preservation of information about artifacts of the built environment is crucial to provide the ability to retrofit legacy buildings, to preserve cultural heritage, to ensure security precautions, to enable knowledge-reuse of design and engineering solutions and to guarantee the legal liabilities of all stakeholders (e.g. designer, engineers).

With the recent paradigm shift in architecture and construction from analog 2D plans and scale models to digital 3D information models of buildings, long-term preservation efforts must turn their attention to this new type of data. Currently, no existing approach is able to provide a secure and efficient long-term preservation solution covering the broad spectrum of 3D architectural data, while at the same time taking into account the demands of institutional collectors like architecture libraries and archives as well as those of the private sector including building industry SMEs, owners, operators and public stakeholders.

In this presentation, an overview of the requirements and challenges of the multi-faceted problem domains of digital preservation in the built environment will be given. As a contribution to possible approaches for solving these challenges, the roadmap of the FP7 ICT-2011-9 project “DuraArk – Durable Architectural Knowledge” will be presented. The intended outcomes of the interdisciplinary working groups within this project will address a number of these problems, spanning from voluminous sets of low-level point-cloud data from laser scans to semantically consistent descriptions of heterogeneous building products and their long-term preservation.

Dipl.-Ing. Jakob Beetz, PhD is a tenured assistant professor at the Department of the Built Environment of the Eindhoven University of Technology, the Netherlands. He graduated in architecture from Bauhaus-University Weimar, Germany, and received his PhD for his work on “Facilitating distributed collaboration in the AEC/FM sector using Semantic Web Technologies”, including an OWL representation of the IFC model and strategies for ontological lifting of the International Framework for Dictionaries (ISO 12006). His research focus is in the areas of Building Information Modeling on schematic and instance levels, model servers and the Semantic Web. He has published frequently in journals and conference proceedings and regularly serves as a reviewer for various journals.

Laura Molloy University of Glasgow, Humanities Advanced Technology & Information Institute (HATII)

“The Jisc managing research data programme: institutional approaches to research data preservation from the UK”
The Jisc Managing Research Data programme aims to improve the management of research data in a variety of UK universities by developing technical infrastructure, raising awareness and developing skills. As a member of the programme team, I will describe some of the current drivers for the effective management and preservation of research data in UK universities, and the response provided by the Jisc Managing Research Data programme. The presentation will touch on the structure and direction of the programme, lessons learned from its approach, and our attempt to quantify the benefits of this work in a structured way.

Laura Molloy is a researcher at the Humanities Advanced Technology and Information Institute (HATII) at the University of Glasgow, Scotland. Laura's research interests include, inter alia, the articulation of digital curation principles and approaches particularly to non-science audiences. She works across the Jisc Managing Research Data programme as an 'evidence gatherer' to elicit, synthesise and analyse specific qualitative and quantitative evidence of the benefits of Jisc research and development projects across the UK. She is also responsible for co-leading development of the DigCurV project's curriculum framework for vocational digital curation education in the cultural heritage sector: a framework for the review, comparison, description and development of digital curation vocational training across the EC. She was a member of the Data Management Skills Support Initiative, which analysed Jisc Managing Research Data programme (2009-11) training materials projects and provided a set of recommendations for future research data management training. Other experience includes delivery of the outreach and training programme of the FP6 PLANETS project. She is a co-convenor of the Theatre and Performance Research Association Working Group on Documentation and a regular speaker on digital curation skills.




Leibnizhaus
"Old Town Hall" for the Get Together