A team from the University of Washington (UW) explored how artificial intelligence (AI) tools such as ChatGPT and DALL-E, developed by OpenAI, interact with children’s creative processes. The study involved 12 Seattle-area kids aged seven to 13. The researchers found that children often needed support from adults and peers to meaningfully integrate AI into their creative practices. The study also revealed that AI systems are not built with children in mind, leading to a mismatch between children’s expectations and the systems’ capabilities. The research was led by Michele Newman, a UW doctoral student, and presented at the ACM CHI Conference on Human Factors in Computing Systems.
AI and Children’s Creativity: An Exploration
Artificial intelligence (AI) has been making waves in the creative world, with AI-generated works winning digital art and photography competitions. This has sparked concerns that AI could replace artists or even outperform humans in creativity. Some people, however, are exploring these tools as a way to enhance their creative processes rather than replace them. A team of researchers from the University of Washington (UW) decided to investigate how AI might influence creativity in children.
The UW-led team conducted six sessions with 12 children aged seven to 13 from the Seattle area, observing how the children’s creative processes unfolded as they worked with AI tools such as ChatGPT and DALL-E. The researchers found that the children often required support from adults and peers to meaningfully incorporate generative AI into their creative practices. The findings were presented at the ACM CHI Conference on Human Factors in Computing Systems.
The Motivation Behind the Research
The study’s lead author, Michele Newman, a UW doctoral student in the Information School, explained the motivation behind the research. Before joining UW, Newman worked on a project that used natural language processing, a form of AI, to measure creativity in elementary school children. When ChatGPT was released, Newman was keen to see how it might affect children’s creativity.
The initial response to the new technology was fear, with many schools banning it over concerns about its potential harm to children. The project aimed to take a more balanced approach, exploring how the technology could support children and build meaningful experiences rather than harm them or replace human work. The goal was to look to the future and develop ethical, meaningful practices around the technology.
Designing the Study
The researchers drew on co-design, the primary methodology of KidsTeam, a program in which adults and kids co-create technology products for children. They put the children in front of tools such as OpenAI’s ChatGPT and DALL-E, and Google’s music generator Magenta, and observed their reactions. They wanted to understand the children’s considerations and frustrations, and what it means to have a tool that can do the creating for them.
The researchers also gave the children a more structured experience, balancing open-ended exploration with directed tasks. For instance, the children used techniques such as comic boarding, in which they created comics about potential good and bad uses of AI.
Key Findings
One of the most significant findings was that these AI systems are not designed with children in mind, creating a clear mismatch between what the children expected the systems to do and what the systems could actually do. For instance, if the AI didn’t know about a video game the children were familiar with, they might conclude that they were smarter than the system.
Children’s language also differs from adults’, which can be an issue when they try to express themselves creatively. For example, one child wanted the AI to “talk like Darth Vader” to help him write a better Star Wars story; the researchers had to guide him through the process, suggesting that children need extra instruction to understand these systems.
Ethical Considerations and the Future of AI in Creativity
The children also had nuanced takes on ethical considerations. For instance, when asked about using AI to write a birthday card, they had different opinions depending on how much the AI was used. They also pondered deep, existential questions about what it means to be an artist and the authenticity of AI-generated creations.
The study showed that children’s creative processes and goals adjusted as they interacted with the AI systems. Sometimes they added extra context to get the desired output, while other times they changed their ideas when they didn’t work. The challenge lies in supporting children and helping them understand their individual creative processes. The interaction with AI is not just about inputting a prompt; it’s about working iteratively with the system while being supported by peers and adults. This support network makes a meaningful experience with these systems much more likely.
