<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jones, Jasmine</style></author><author><style face="normal" font="default" size="100%">Merritt, David</style></author><author><style face="normal" font="default" size="100%">Ackerman, Mark S.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">KidKeeper: Design for Capturing Audio Mementos of Everyday Life for Parents of Young Children</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">audio</style></keyword><keyword><style  face="normal" font="default" size="100%">candid</style></keyword><keyword><style  face="normal" font="default" size="100%">capture</style></keyword><keyword><style  face="normal" font="default" size="100%">children</style></keyword><keyword><style  face="normal" font="default" size="100%">curation</style></keyword><keyword><style  face="normal" font="default" size="100%">digital memento</style></keyword><keyword><style  face="normal" font="default" size="100%">family memory</style></keyword><keyword><style  face="normal" font="default" size="100%">memorabilia</style></keyword><keyword><style  face="normal" font="default" size="100%">memory artifact</style></keyword><keyword><style  face="normal" font="default" size="100%">parents</style></keyword><keyword><style  face="normal" font="default" size="100%">tangible</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2017</style></year></dates><urls><web-urls><url><style face="normal" font="default" 
size="100%">Complete</style></url></web-urls></urls><pages><style face="normal" font="default" size="100%">1864–1875</style></pages><isbn><style face="normal" font="default" size="100%">978-1-4503-4335-0</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Children grow up fast. Many parents want to capture the candid, fleeting moments of their young children&#039;s lives to treasure later, but these moments are difficult to anticipate and to capture without disruption. Current technologies to address this are limited to indiscriminately capturing everything, or are dependent on parents&#039; presence and prescience to initiate capture and manually record the moment. To address these limitations, we introduce KidKeeper, a toy-like system to capture, select, and deliver everyday family memories with minimal effort and disruption to family life. It uses an innovative approach to capture that we call &quot;integrated capture,&quot; which combines previous attempts to continuously capture family memories with the practice-oriented approach of &quot;unremarkable computing&quot; to embed capture capabilities unobtrusively into everyday activities. 
In our study, we explore how technologies like KidKeeper mediate and align the different interests and values of various family members, namely parents who want precious moments and children who want to play, towards accomplishing a family goal to capture memories of everyday life.&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Merritt, David</style></author><author><style face="normal" font="default" size="100%">Jones, Jasmine</style></author><author><style face="normal" font="default" size="100%">Ackerman, Mark S.</style></author><author><style face="normal" font="default" size="100%">Lasecki, Walter S.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Kurator: Using The Crowd to Help Families With Personal Curation Tasks</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">crowdsourcing</style></keyword><keyword><style  face="normal" font="default" size="100%">curation</style></keyword><keyword><style  face="normal" font="default" size="100%">digital audio</style></keyword><keyword><style  face="normal" font="default" size="100%">digital curation</style></keyword><keyword><style  face="normal" font="default" size="100%">hybrid intelligence</style></keyword><keyword><style  face="normal" font="default" size="100%">mixed-expertise</style></keyword><keyword><style  face="normal" font="default" size="100%">personal curation</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2017</style></year></dates><urls><web-urls><url><style face="normal" font="default" 
size="100%">Complete</style></url></web-urls></urls><pages><style face="normal" font="default" size="100%">1835–1849</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;People capture photos, audio recordings, video, and more on a daily basis, but organizing all these digital artifacts quickly becomes a daunting task. Automated solutions struggle to help us manage this data because they cannot understand its meaning. In this paper, we introduce Kurator, a hybrid intelligence system leveraging mixed-expertise crowds to help families curate their personal digital content. Kurator produces a refined set of content via a combination of automated systems able to scale to large data sets and human crowds able to understand the data. Our results with 5 families show that Kurator can reduce the amount of effort needed to find meaningful memories within a large collection. This work also suggests that crowdsourcing can be used effectively even in domains where personal preference is key to accurately solving the task.&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Merritt, David</style></author><author><style face="normal" font="default" size="100%">Hung, Pei-Yao</style></author><author><style face="normal" font="default" size="100%">Ackerman, Mark S.</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Treem, Jeffrey W.</style></author><author><style face="normal" font="default" size="100%">Leonardi, Paul M.</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Expertise Finding: A Socio-Technical Design Space Analysis</style></title><secondary-title><style face="normal" font="default" size="100%">Expertise, Communication, and Organizing</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">expertise finding</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2016</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">MISSING_URL_ABSTRACT</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Oxford University Press</style></publisher><pub-location><style face="normal" font="default" size="100%">New York</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language></record></records></xml>