
AI Literacy through Embodiment and Co-Creativity

Image of three exhibit prototypes that form the basis of this grant. Left: LuminAI; center: Knowledge Net; right: Creature Features.

Description

Although artificial intelligence (AI) is playing an increasingly large role in mediating human activities, most education about what AI is and how it works is restricted to computer science courses. This research is a collaboration between the TILES lab, the Expressive Machinery Lab (Dr. Brian Magerko, Georgia Tech), and the Creative Interfaces Research + Design Studio (Dr. Duri Long, Northwestern University) to create a set of museum exhibits aimed at teaching fundamental AI concepts to the public. In particular, we aim to reach middle school girls and students from groups that are underrepresented in computer science.

Creature Features asks learners to build their own training dataset to teach an AI what a bird is. After choosing examples and non-examples, learners receive feedback on how well their AI is able to identify new birds.
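
As a minimal sketch of this interaction, assuming a simple nearest-neighbor model over invented features (the exhibit’s actual model is not described here), the feedback step amounts to checking how the learner-built dataset labels new items:

```python
# Hypothetical illustration only: learners label examples and non-examples,
# a simple model is fit to those labels, and new items show how well the
# learner-built dataset generalizes. Features and values are invented.

# Each item is a tuple of made-up features: (has_feathers, has_beak, can_fly)
training_set = [
    ((1, 1, 1), "bird"),      # robin   - example
    ((1, 1, 0), "bird"),      # penguin - example
    ((0, 0, 1), "not bird"),  # bat     - non-example
    ((0, 0, 0), "not bird"),  # cat     - non-example
]

def predict(features, training_set):
    """Nearest-neighbor prediction: copy the label of the most similar training item."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_set, key=lambda example: distance(example[0], features))
    return label

# "Feedback" step: test the learner-built dataset on new, unseen animals.
for name, features in {"ostrich": (1, 1, 0), "butterfly": (0, 0, 1)}.items():
    print(name, "->", predict(features, training_set))
```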

This 4-year project is funded by the NSF Advancing Informal STEM Learning (AISL) program (NSF DRL #2214463). We are collaborating with the Museum of Science and Industry in Chicago to conduct focus groups, needs assessments, and pilot testing of exhibit designs based on our prior work.

Knowledge Net asks learners to use tokens to create a network that powers a chatbot. After uploading an image of their network to a custom website, learners can ask the chatbot questions.
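
A rough sketch of how a token network could drive simple question answering, assuming the network is read as subject-relation-object triples (the exhibit’s actual image-upload and chatbot pipeline is not reproduced here):

```python
# Hypothetical illustration only: the learner's token network is represented as
# subject-relation-object triples, and "chatbot" answers come from graph lookups.
# The relations and facts below are invented.

knowledge_net = [
    ("dog", "is a", "mammal"),
    ("dog", "likes", "bones"),
    ("mammal", "has", "fur"),
]

def answer(subject, relation, net):
    """Return everything the network connects to `subject` via `relation`."""
    matches = [obj for s, r, obj in net if s == subject and r == relation]
    return ", ".join(matches) if matches else "I don't know."

print(answer("dog", "likes", knowledge_net))   # -> bones
print(answer("dog", "is a", knowledge_net))    # -> mammal
print(answer("cat", "is a", knowledge_net))    # -> I don't know.
```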

This research will explore how embodiment and co-creativity can help learners make sense of and engage with AI concepts.

Publications

Image of a digital city with the Tale Town title

Tale Town

Description

Our project, Technology for Acquiring Language through Engagement (TALE), focuses on designing a gamified platform to help self-learners, specifically non-native English speakers, acquire foreign language skills.

Research question: How might we help intermediate to advanced English learners get more practice speaking their target language with others?

Motivations: Self-learners encounter a number of challenges when attempting to learn a language, including:

  1. Lack of depth in existing free learning materials
  2. Disconnect from native speakers and culture
  3. Struggles with loss of interest or feelings of burnout over time
  4. Difficulties practicing natural conversations

Why gamification? Research has shown that playful environments and gamified approaches can be powerful tools for engaging learners.

Project Team

  • Kyle Leinart, MS-HCI student
  • Irene Ong, MS-HCI student
  • Calvin Mammen, MS-CS student
  • Theresa Hsieh, MS-HCI student
  • Jessica Roberts, Faculty advisor

Accessible Oceans

Co-design sessions with oceanographers are helping elicit the aspects of visualized oceanography data that are crucial to convey to blind and visually impaired (BVI) and novice audiences.

Project Description

Accessible Oceans: Exploring Ocean Data Through Sound is a Pilots and Feasibility study funded by the NSF’s Advancing Informal STEM Learning program (NSF AISL #2115751). The interdisciplinary project team is exploring how oceanography data from the Ocean Observatories Initiative can be sonified and made interactive in a museum exhibit tailored to visually impaired learners.
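
As a toy illustration of parameter-mapping sonification, one common approach maps each data value to a pitch; the time series, mapping range, and tone lengths below are assumptions, not the project’s design:

```python
# Hypothetical illustration only: values, ranges, and durations are invented,
# and this is not the Accessible Oceans sonification design.

import math
import struct
import wave

SAMPLE_RATE = 44100
temperatures_c = [4.1, 4.3, 5.0, 6.2, 5.5, 4.8]   # made-up ocean temperature series

def value_to_frequency(value, lo=4.0, hi=7.0, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value onto a pitch range (low values -> low pitches)."""
    t = (value - lo) / (hi - lo)
    return f_lo + t * (f_hi - f_lo)

with wave.open("sonification.wav", "w") as out:
    out.setnchannels(1)
    out.setsampwidth(2)                 # 16-bit samples
    out.setframerate(SAMPLE_RATE)
    for value in temperatures_c:
        freq = value_to_frequency(value)
        for i in range(int(0.3 * SAMPLE_RATE)):     # a 0.3-second tone per data point
            sample = int(20000 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
            out.writeframes(struct.pack("<h", sample))
```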

The project is a collaboration between the Woods Hole Oceanographic Institution (WHOI), Your Ocean Consulting, the University of Oregon, and Georgia Tech.

Visit the project website here.

Publications & Presentations

  • Roberts, J., Lowy, R., Li, H., Bellona, J., Smith, L., and Bower, J. (2023). Breaking Down the Visual Barrier: Designing Data Interactions for the Visually Impaired in Informal Learning Settings. Paper presented at the Annual Conference on Computer-Supported Collaborative Learning (CSCL 2023), Montreal, Quebec, Canada. Read the paper here. View our lit review dataset here.
  • Bower, A., J. Bellona, J. Roberts, L. Smith, 2023. Accessible Oceans: Exploring Ocean Data Through Sound, presented at 2023 Sonification World Chat Meeting, 23 May. Online.
  • Bower, A., J. Bellona, J. Roberts, L. Smith, 2022. Accessible Oceans: Exploring Ocean Data Through Sound (ED12A-03), presented at 2022 AGU Fall Meeting, 12-16 December.
  • Braun, R., L. Karlstrom, A.S. Bower, M.O. Archer, 2022. Listening to Our World: Sonification Applications in Research, Education, and Outreach Town Hall (TH13H), presented at 2022 AGU Fall Meeting, 12-16 December.

Project Team

  • Dr. Amy Bower, WHOI [PI]
  • Dr. Jon Bellona, U Oregon [co-PI]
  • Dr. Jessica Roberts, Georgia Tech [co-PI]
  • Dr. Leslie Smith, Your Ocean Consulting, LLC [key personnel]
  • Huaigu Li, CS PhD student
  • Rachel Lowy, HCC PhD student
  • Haisley Kim, MS-HCI student
  • Joy Ying, CS undergraduate student

Book release: Data Through Movement

Together with my colleague and friend Francesco Cafaro (IUPUI), I am very excited to announce the release of our new book, Data Through Movement: Designing Embodied Human-Data Interaction for Informal Learning. https://doi.org/10.2200/S01104ED1V01Y202105VIS013.

Cover of printed book Data Through Movement

This project has been a long time in the making, beginning with our initial collaboration on the CoCensus project over 10 years ago! This volume, part of the Morgan & Claypool Synthesis Lectures on Visualization, provides an overview of the theoretical foundations of embodiment and discusses empirical findings from multiple human-data interaction (HDI) projects in museums.

You can find more information and details about where to get it here.

Developing Technologies to Support Remote Art Education for K-2 Students

Image of proposed VR museum platform.
A virtual museum platform built in Mozilla Hubs provides users a chance to play with shapes and colors in authentic art pieces.

Description

Art is an important factor in child development. Research has highlighted art education’s role in children’s acquisition of the economic, cultural, and civic capital required to sustain a community’s cultural resources. For K-2 learners, art education also contributes to the development of fine motor skills, cognition, and interpersonal relationships. Incorporating art museum visits into school curricula is one way students can have repeated, sustained engagement with art. Recognizing this, many art museums provide digital resources to support integration into classrooms, but little research investigates how classrooms use these resources. Additionally, little research investigates technology designs that support the interactions and needs of K-2 learners, teachers, art museum educators, and docents in fully remote art education settings.

This project uncovers key implications and design requirements for developing effective remote art education environments for K-2 learners and educators. From these requirements, we developed novel, instrumented tangible tools that create learning opportunities for K-2 learners to practice fine motor skills and age-appropriate art principles. The project also studies how integrating these tools into virtual environments can support K-2 learners in remote settings.

Understanding K-2 Remote Art Education Needs

Repeated, meaningful art education experiences for young children are often coordinated across school settings, which support daily art encounter opportunities, and museums, which provide more isolated encounters with artworks. To understand the needs of classroom and museum educators in remote K-2 settings, we conducted a needs survey and interviews. We developed three sets of design requirements covering their needs, and we developed a novel typology of existing art education platforms, identifying where educators’ needs are and are not met. This project is described in our 2022 Interaction Design and Children (IDC) paper: Ready, Set, Art: Technology Needs and Tools for Remote K-2 Art Education.

The Chameleon Clippers

Lead: Gennie Mansi, HCC PhD student

To satisfy the need for young learners to receive appropriate feedback as they practice fine motor skills in remote environments, we created the Chameleon Clippers. This low-cost instrumentation of classic school scissors uses line sensors and a custom-built Processing application to alert users when they deviate from the line they are attempting to cut.
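
The core feedback loop could look something like the sketch below, written in Python rather than the project’s actual Processing application; the sensor threshold and readings are invented stand-ins for the real line-sensor hardware:

```python
# Hypothetical Python stand-in for the project's Processing application.
# Sensor values and the threshold are invented; a real deployment would read
# the line sensors mounted on the scissors instead of random numbers.

import random

ON_LINE_THRESHOLD = 600   # made-up reflectance cutoff between "on the line" and "off"

def read_line_sensor():
    """Stand-in for polling the scissors' line sensor; returns a reflectance value."""
    return random.randint(0, 1023)

def alert(message):
    """Stand-in for the visual/audio cue the real application gives the child."""
    print(message)

for _ in range(10):       # the real app would loop continuously while the child cuts
    reading = read_line_sensor()
    if reading < ON_LINE_THRESHOLD:
        alert("Off the line! Steer the scissors back toward the line.")
    else:
        alert("On the line, keep going!")
```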

This project is described in our 2022 Computer-Supported Collaborative Learning (CSCL) paper:

Mansi, G., Boone, A., Kim, S. & Roberts, J. (2022). Chameleon Clippers: A Tool for Developing Fine Motor Skills in Remote Education Settings. In Proceedings 2022 International Conference on Computer Supported Collaborative Learning (CSCL). Hiroshima, Japan and online. Best design paper nominee

PLVM: Play and Learn in the Virtual Museum

Lead: Sue Reon Kim, MS-HCI

Increasing shifts to online and remote education in recent years — greatly augmented by the Covid-19 pandemic — have created a new challenge for museum-based art educators: How can young children have impactful art engagement experiences on remote museum tours?

In this project, I explore K-2 art educators’ pedagogical needs in facilitating remote art tours through co-design and offer a technological solution, Play and Learn in the Virtual Museum (PLVM). PLVM (pronounced “plum”) is a web-based digital platform for K-2 art educators and students that supports the following:

  1. Simple technology setup for students (3-point setup)
  2. Ability to look deeply at the art using different interaction models such as discussion, drag-and-drop, pointing, multiple choice, and storytelling
  3. Integration with existing educational tools and media such as 360° views, YouTube, Google Slides, and images from the museum collection database
  4. Shareable hands-on activities
  5. Texture sounds for the 3D elements

This platform offers both moderated and unmoderated versions to make it more accessible for art educators with limited resources.

Demo Video

Project Team

  • Gennie Mansi, HCC PhD candidate
  • Sue Reon Kim, MS-HCI graduate
  • Ashley Boone, HCC PhD student
  • Jessica Roberts, Faculty

Crossroads: Helping cisgender students understand their transgender peers in Greek Life

Project Description

Transgender and nonbinary (trans/nonbinary) college students face unique challenges as members of Greek Life organizations, as their identities contradict the heteronormative culture of most Greek communities. Despite these differences, trans/nonbinary students still exist within Greek communities, and many Greek organizations have been making efforts to be inclusive of these members. Institutional-level changes, however, do little to prepare individual Greek students to be inclusive of their trans/nonbinary peers, and because GT Greek Life does not mandate LGBTQ+ education, cisgender students struggle to bridge this gap in knowledge, while transgender students are expected to act as educators. Crossroads is an educational mobile application designed for cisgender Greek students. It teaches users about LGBTQ+ concepts, specifically focusing on trans/nonbinary issues within Greek communities.

The Crossroads application has four features: learning modules, community messaging, a glossary, and external resources. In our proof-of-concept prototype, we designed and tested two modules, one covering basic LGBTQ+ terminology and concepts and a second focusing specifically on trans/nonbinary issues. These modules are followed up with ‘daily challenges’ that use a spaced repetition model to support long-term learning. We hypothesize that this learning environment will provide cisgender Greek students with an accessible LGBTQ+ learning resource that integrates easily into a college student’s schedule and acts as an alternative to relying on trans/nonbinary students for education.
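
As one illustration of how such scheduling can work, the sketch below uses a simple Leitner-style scheme with invented intervals and prompts; it is not necessarily the model the app implements:

```python
# Hypothetical Leitner-style spaced repetition sketch; intervals and prompts are invented.

from datetime import date, timedelta

REVIEW_INTERVALS_DAYS = [1, 3, 7, 14, 30]  # hypothetical spacing between reviews

class ChallengeItem:
    def __init__(self, prompt):
        self.prompt = prompt
        self.box = 0                       # 0 = newly introduced or recently missed
        self.next_review = date.today()

    def record_answer(self, correct):
        """Correct answers move the item to a longer interval; misses reset it."""
        self.box = min(self.box + 1, len(REVIEW_INTERVALS_DAYS) - 1) if correct else 0
        self.next_review = date.today() + timedelta(days=REVIEW_INTERVALS_DAYS[self.box])

def due_today(items):
    """The 'daily challenge' queue: items whose review date has arrived."""
    return [item for item in items if item.next_review <= date.today()]

items = [ChallengeItem("What does 'nonbinary' mean?"),
         ChallengeItem("Which pronouns should you use if you are unsure?")]
for item in due_today(items):
    item.record_answer(correct=True)
    print(item.prompt, "-> next review on", item.next_review)
```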

Publications

Stephanie Baione, Yiming Lyu, Audrey Reinert & Jessica Roberts (2022) Crossroads: a transgender education platform for Greek life students, Journal of LGBT Youth, DOI: 10.1080/19361653.2022.2070813
Please contact me if you would like to read the article but do not have access and I will be happy to send you an access link.

Video

Project Team

  • Stephanie Baione, MS-HCI 2021
  • Yiming Lyu, MS-HCI 2021
  • Audrey Reinert, co-advisor
  • Jessica Roberts, advisor

Coal Pollution Impacts Explorer

Description

When sulfur oxides (SOx) are emitted from power plant facilities, they do not fall directly to the ground; they are carried by air currents, sometimes over great distances. Modeling the atmospheric transport and dispersion of these particles can estimate the fine particulate matter (PM2.5) source impacts attributable to SOx emissions from each of the more than 1,200 coal-fired electricity generating units in operation in the United States between 1999 and 2018.

The Coal Pollution Impacts Explorer (C-PIE) is a web-based interface designed to visualize and scaffold atmospheric data and modeling for a public audience. Users can investigate the sources of pollution in their home county’s air, examine where pollution from a nearby facility disperses, and explore trends over time as facilities install pollution-mitigating scrubbers in response to legislative actions.
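
These interactions can be thought of as queries over a source-receptor table; the sketch below uses invented facility names, counties, and values rather than the project’s real modeled dataset:

```python
# Hypothetical illustration only: the facility names, counties, and contribution
# values below are invented; the real C-PIE data come from atmospheric modeling.

# (facility, receptor county, year, modeled PM2.5 contribution in ug/m^3)
impacts = [
    ("Plant A", "Fulton County, GA", 2005, 0.42),
    ("Plant A", "Fulton County, GA", 2015, 0.11),  # lower after a scrubber is installed
    ("Plant B", "Fulton County, GA", 2005, 0.30),
    ("Plant A", "Cobb County, GA",   2005, 0.25),
]

def sources_for_county(county, year, impacts):
    """'Who contributes to my county's air pollution?' -- facilities ranked by contribution."""
    rows = [(facility, value) for facility, cty, yr, value in impacts
            if cty == county and yr == year]
    return sorted(rows, key=lambda row: row[1], reverse=True)

def footprint_of_facility(facility, year, impacts):
    """'Where does this facility's pollution go?' -- counties it affects in a given year."""
    return [(cty, value) for fac, cty, yr, value in impacts
            if fac == facility and yr == year]

print(sources_for_county("Fulton County, GA", 2005, impacts))
print(footprint_of_facility("Plant A", 2005, impacts))
```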

Research on the C-PIE platform investigates how data interactions can be scaffolded to support inquiry and engagement for public audiences.

The interactive C-PIE platform can be found at: https://cpieatgt.github.io/cpie/

Below you will find representations of some of our iterative development work on the platform. To read more about the impacts of coal pollution, read our recent article in the journal Science:

Henneman, L., Choirat, C., Dedoussi, I., Dominici, F., Roberts, J., & Zigler, C. (2023). Mortality risk from United States coal electricity generation. Science, 382(6673), 941-946. https://doi.org/10.1126/science.adf4915

Images & Videos

Research Team

  • Jessica Roberts
  • Lucas Henneman (George Mason University)
  • Sue Reon Kim, MS-HCI 2021
  • Srijan Jhanwar, MS-HCI 2022

Gennie represents TILES at ISLS 2021

Congratulations to Gennie Mansi for her virtual poster at today’s CSCL poster session at ISLS 2021!

Embodiment and Social Interactions in a Class Virtual Reality Poster Session

Gennie Mansi, Blair MacIntyre, & Jessica Roberts

Abstract: There is a growing enthusiasm to use VR to improve remote student learning experiences. However, incongruities between students’ virtual embodiment – as avatars in the virtual environment – and physical embodiment – from their biological bodies – can significantly impact them. We observed a university class poster session held entirely in an immersive Mozilla Hubs environment. We found that incongruities in embodiment created both challenges and significant opportunities for students to collaborate and learn in a shared space.

Download a PDF of the poster here: CSCL2021_poster_final

FossilVR: Supporting literacy skills through science

A Virtual Reality space for teaching 3rd grade literacy skills

Description

Students are expected to learn how to make inferences in the third grade, but few high-quality resources are available to help students master this skill. Frequently, the skill is practiced by giving students pictures and asking them to infer something about the picture (for example, from a photo of children standing in front of bicycles at a beach, a student could infer that it is summertime, that the children are siblings, or that they rode their bikes to the beach). Teachers evaluate the inferences based on whether or not they are plausible, but students are often left to make up a set of disconnected inferences. FossilVR is a novel virtual environment that grounds the skill of making inferences in an authentic context: a paleontological fossil dig.

Students travel through the virtual environment with Dr. Hannah, the lead paleontologist at the site, and dig up fossils, about which they are then asked to make observations and inferences in their field notebook. The notebook contains scaffolds to guide noticing to help students create an argument about the characteristics of the specimen. We hypothesize that this system will increase the quality of inferences made, support argumentation skills, and create a more enjoyable learning experience compared with traditional methods.

Publications

  • Roberts, J., & Leinart, K. (2022) How Big was a Triceratops, Really? Using Augmented Reality to Support Collaborative Reasoning about Scale. In Tissenbaum et al. Learning at the intersection of physical spaces and technology. Symposium to be conducted at the 2022 International Conference on Computer Supported Collaborative Learning (CSCL). Hiroshima, Japan and online.  
Video introduction presented at CSCL Physical Spaces Symposium

Demo Video

Research Team

  • Morgan Chin, MS-HCI 2021
  • Kyle Leinart, MS-HCI
  • Anirudh Mukherjee, MS-CS
  • Jessica Roberts, advisor
  • Blair MacIntyre, collaborator
  • Molly Porter (NHMLAC), collaborator