In order to better integrate my blog with my website, better manage comment spam, and reduce my dependence on Google, this blog has moved. To avoid broken links I won't be deleting content from here, but no new content will be added, so please update your bookmarks and feeds.

Tuesday, 20 November 2012

New Memory Palaces and the Sublime #ndf2012

Piotr Adamczyk, @adamczyk, Google Art Project
Piotr has been exploring the possibilities for exchange between practices in the sciences and evaluation techniques from the arts. Most recently he led development on the Google Art Project. Before that he held an analyst position with The Metropolitan Museum of Art. With a background in Mathematics and Computer Science, Piotr holds graduate degrees in Human Factors and Library and Information Science from the University of Illinois at Urbana-Champaign. Piotr has authored papers and organized workshops for Association for Computing Machinery conferences centred on human-computer interaction, and served as a Program Committee member for ACM Creativity & Cognition in 2007 and 2009. His recent work is focused on the use of open/linked data in cultural heritage institutions.

Shows images of museum content contrasting with museum data... Worked on Museum Data Exchange Project with OCLC

At one point he had to scrape his own website to get data into shareable form. Has used Yahoo Pipes. Tried this with a single object - what about a whole collection? V&A infinite scroll. [Google, DuckDuckGo, etc. do this - why not library catalogues and databases?] SFMOMA ArtScope lets you see an overview of the whole collection but is still distancing: doesn't give you context or differences. SFMOMA Collections Online Visualization Tool - Beta turns it into a graph.
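(A rough sketch of what "scraping your own collection site" can look like when no API exists. The HTML snippet, CSS class names, and fields below are invented for illustration - a real museum page would differ - but the approach, turning markup back into structured records, is the same.)

```python
# Hypothetical example: recovering structured object records from a
# collection listing page using only the Python standard library.
from html.parser import HTMLParser
import json

# Invented sample markup standing in for a real collection page.
SAMPLE_PAGE = """
<ul class="collection">
  <li class="object" data-id="1234">
    <span class="title">Portrait of a Lady</span>
    <span class="date">1876</span>
  </li>
  <li class="object" data-id="5678">
    <span class="title">Still Life with Flowers</span>
    <span class="date">1890</span>
  </li>
</ul>
"""

class CollectionScraper(HTMLParser):
    """Turn object listings embedded in HTML back into structured records."""
    def __init__(self):
        super().__init__()
        self.records = []
        self._current = None   # record being built
        self._field = None     # field currently being read

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        cls = attrs.get("class", "")
        if tag == "li" and cls == "object":
            self._current = {"id": attrs.get("data-id")}
        elif tag == "span" and self._current is not None and cls in ("title", "date"):
            self._field = cls

    def handle_data(self, data):
        # Only capture text while inside a known field.
        if self._field and self._current is not None:
            self._current[self._field] = data.strip()

    def handle_endtag(self, tag):
        if tag == "span":
            self._field = None
        elif tag == "li" and self._current is not None:
            self.records.append(self._current)
            self._current = None

scraper = CollectionScraper()
scraper.feed(SAMPLE_PAGE)
print(json.dumps(scraper.records, indent=2))
```

The same parser could be fed page after page of an infinite-scroll listing (the pagination the V&A example relies on) to accumulate the whole collection.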

Got into Google Art Project as a Met employee. Now working on backend metadata and systems. 36,102 artworks from 184 collections in 43 countries - 2 from New Zealand. Will take any nonprofit institution with copyright-free or -cleared content. Not replacing existing platforms but creating another one. Not just the art but also a Street View component, currently in 55 galleries. (A 2m-tall trolley that someone walks behind; the gallery says where to go.)

Adding features - in Google they can do things quickly compared to "museum time". E.g. user galleries; a compare feature to view two artworks side by side, e.g. a sketch and a finished painting - an unplanned benefit of aggregation. Quickly added a Hangouts feature, so you can take people through a guided tour. Goggles - image recognition software for mobile devices - leads back to the institution's website.

"Memory Palace" (Wikipedia, WikiHow)

Lots of photos of exhibits here, and Flickr groups - What's in your bag?, Bookshelf project - musing about how we visualise collections of things.

Google Art Project can only give a sense of what's measurable, a sense of what institution has said is most significant. "But what we do well is we do everything at once."

Brings us back to copyright, he says, showing a Street View with one of the images blurred out. (There's someone who goes through this Street View, takes the blurred images, and gets someone to paint them, so that the blurred painting now actually exists!) There are also some glitches in the images due to software/hardware as the trolley trundles through. Some issues remain but he thinks they're still doing good stuff.

Q: What problems do people report with StreetView?
A: Visitors ask why they can't go into certain spaces (navigation is determined by institution); institutions report more technical problems.

Q: Showed us several examples of meta-art - would it be useful to articulate a new level of language to talk about this kind of art?
A: Big data is something we need to deal with, and scientists have been looking at it. When you start putting things together, you need a different way of talking about the collection. The language of curation and selection has to change. Getting metadata from different institutions to talk together is hard.

Q: You just used "big data", "curation", and "selection" in a single sentence. Can we select ("a person sifting through every day for ever") or do we just take everything ("the firehose")?
A: May need to go with the firehose. Can we expect people to sift, or machines, or...? May depend on how much meta info we need - if we want richness might need human intervention, to get closer to meaning. Machines can only do so much.

Q: How do we enable people to make meaning for themselves; how enhance engagement?
A: Each institution has very different reasons for joining the project. Some because everyone else was, some because they don't have their own website, some trying to drive users back to their own site, some to make use of advanced features. How do we measure success? 50 million visits in the last six months, which we know is just looking at the objects. Does this mean more engagement?

Q: Can you talk about the Google Art Project's plans for opening up connections not just through screens?
A: When setting up, background work was done converting data to a metadata standard and giving this back to the institution. Fewer than half of institutions have given metadata (for whatever reason). So they're held back from opening up an API, because only a few institutions could do something with it right now; but it's something they're interested in.