AI Research
Similar to the month-long Internship Program that happens at the end of junior year, we are given another chance at an off-campus experience right before we graduate high school. Although I thought about returning to my previous internship with Matt in the Bay Area, I wanted something different—something that would place me in a new environment, with new people, and challenge me in new ways. Luckily, earlier that year I had unexpectedly stumbled into an opportunity to assist with research at UC San Diego on how AI impacts learning. Sarah Segall, then a third-year undergraduate working at the Social Cognitive Development Lab, had been interviewing students and teachers about how they were using AI in school. One of the schools she visited was High Tech High Mesa, and since I had my own thoughts about how AI was shaping my learning, I was eager to share them.
What began as a short, twenty-minute interview quickly turned into a much longer conversation—not just about the practical ways students were experimenting with AI, but also about the bigger-picture questions of how these trends might reshape education over the next decade. By the end, it seemed clear that the lab could use some extra hands, and since I was looking for an externship, the timing lined up perfectly. During the four weeks I spent at UCSD, I reviewed data the lab had been collecting and got a crash course in the world of academic research. I learned about IRB protocols and ethical research practices, and I contributed to a project that asked how AI might be woven into classrooms in ways that enhance learning rather than diminish it.

My externship wrapped up with a presentation on the experiment we were preparing for that fall, but I came back later in the summer to help run the actual study at my high school. Thanks to the block schedule, we designed a crossover experiment: one class received traditional instruction in Week 1 while another used an AI tutor, then in Week 2 the conditions switched to balance out differences between groups. The AI tutor was built on principles of learning science, trained only on lesson-specific content to avoid hallucinations, and designed to guide comprehension rather than provide direct answers. The lessons themselves focused on the policy and structure of water systems, with students researching case studies in which water quality or availability became a real issue for real people.
To measure the impact, we collected several layers of data. Students completed pre- and post-tests to track learning gains, both on the content material and on the final deliverable. Surveys captured motivation, engagement, and confidence, while open-ended questions asked students to describe both challenges and helpful aspects of using the AI tutor. The AI chat logs themselves became a source of insight: we tracked the number of back-and-forth prompts, categorized how students used the tutor through the ICAP framework, and analyzed time on task and sentiment—whether students expressed confusion, frustration, or clarity. Together, this created a picture of not just what students learned, but how they engaged with AI as a partner in the process.
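The core of the crossover analysis can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not the lab's actual pipeline: the field names, sample scores, and record structure are all assumptions made for the example.

```python
# Minimal sketch of the crossover analysis described above.
# Every student experiences both conditions across the two weeks,
# so pooling gains per condition helps control for between-group differences.
# All field names and sample records below are hypothetical.

from statistics import mean

# Each record: one student in one week, with the condition they received
# and their pre-/post-test scores for that week's unit.
records = [
    {"student": "s1", "week": 1, "condition": "ai",          "pre": 40, "post": 70},
    {"student": "s1", "week": 2, "condition": "traditional", "pre": 45, "post": 65},
    {"student": "s2", "week": 1, "condition": "traditional", "pre": 50, "post": 68},
    {"student": "s2", "week": 2, "condition": "ai",          "pre": 48, "post": 80},
]

def mean_gain(condition):
    """Average post-minus-pre gain, pooled across both weeks, for one condition."""
    gains = [r["post"] - r["pre"] for r in records if r["condition"] == condition]
    return mean(gains)

print(mean_gain("ai"))           # average gain under the AI tutor
print(mean_gain("traditional"))  # average gain under traditional instruction
```

The real study layered surveys, chat-log coding, and sentiment on top of this, but the pre/post gain comparison is the backbone the other measures contextualize.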

More recently, I was selected for the Nippon Foundation HUMAI Program, a new initiative that brings together students and researchers exploring the intersection of humanities, social sciences, and AI. The program was launched with the belief that AI will not only transform science and technology, but also fundamentally reshape the humanities by raising questions about what it means to be human, how knowledge is produced, and how culture evolves when machines enter the picture. By funding young scholars to experiment in these traditionally non-technical fields, the foundation hopes to spark ideas that are both critical and creative—perspectives that push beyond simply "using" AI into asking what its impact on society might be.
This year marks the very first HUMAI cohort, which makes it feel all the more exciting to be part of it. The program creates a forum where students from different disciplines and even different countries can share projects, challenge assumptions, and explore ideas that don't always fit neatly inside traditional academic structures. At our first gathering, we'll each be presenting posters on our current work, and I'll be sharing the UCSD classroom study I've been involved in. For me, this feels like the perfect bridge between my direct experience in the lab and the broader conversations about AI's role in education. Beyond the poster, I hope to use this program as a platform to keep exploring how AI might transform the classroom—both the risks it poses and the opportunities it opens up.

Ultimately, I don't believe teachers will ever be replaced by AI, because education has never been just about transferring knowledge. At its core, it's about human connection and the collective growth that happens when people learn together. I do believe that AI will force us to rethink the role of teachers: less as gatekeepers of information and more as facilitators of curiosity—mentors who design projects that connect student interests with real-world problems and opportunities to create value. In many ways, AI is exposing the flaws in an education system built on memorization and the downloading of knowledge to be regurgitated on a test. At the same time, AI is making it clearer than ever that deeply human abilities—storytelling, leadership, vision-setting, and the capacity to organize complexity and chaos—are the ones that matter most. My hope is that this becomes a wake-up call: that education must evolve to cultivate these enduring skills rather than clinging to outdated models of the past.