The AI-Era Digital Divide
Listening to Mexican Youth Voices

Source: Secretaría de Economía (DataMéxico), Profile of Chihuahua. https://www.economia.gob.mx/datamexico/en/profile/geo/chihuahua-ch?redirect=true
One Mexican teenager admitted in an online survey, “To summarize information or conduct research, honestly, it makes my work much faster, but it does worsen my research skills.” She was referring to AI as an educational tool.
In collaboration with the Ministry of Education in Chihuahua, Mexico, and public high schools across the state, we conducted this anonymous online survey to understand how adolescents are using generative AI. We wanted to inform future AI integration strategies in schools. Students were recruited from public COBACH high schools (Colegio de Bachilleres) throughout Chihuahua, the largest state in Mexico. The survey targeted students in their fourth and sixth semesters (equivalent to 11th and 12th grades), with a total of 7,739 participants. We explored students’ access to AI tools, frequency and purpose of use, self-assessed skill levels, attitudes toward AI, and its perceived effects on learning and personal development. Sample questions included: “How often do you use generative AI tools for school?” and “How skilled do you consider yourself in using AI to complete academic tasks?”
The truth is we don’t know much about generative AI and education in Latin America, and we were hoping to help fill that gap. In a region where the digital divide has long centered on disparities in access and infrastructure, the rapid integration of emerging technologies raises urgent questions: What does equitable technology use look like in the age of AI? And how prepared do students feel to navigate this evolving landscape?
Worldwide, including in Latin America, recent advances in large language models (LLMs) and the popularity of generative AI tools like ChatGPT and Gemini have led to a surge in AI use in education—and with it, much debate over its impact on teaching, learning, and academic integrity.
Recent surveys have documented high levels of student engagement with AI tools in education, alongside persistent gaps in access, parental awareness, and institutional guidance (Common Sense Media, 2024; European Parliamentary Research Service, 2025). This trend is evident in both Europe and the United States. Was it any different in Latin America?
To shed light on these questions, we conducted this large-scale survey of adolescents in Chihuahua. Among the students who participated in our study, access to AI tools was relatively widespread; however, levels of confidence, formal instruction and critical engagement varied considerably.
These findings point to a layered and evolving form of the digital divide—one that is no longer defined solely by access and adoption, but increasingly by the quality of use and the presence (or absence) of informed, pedagogically grounded guidance in navigating emerging technologies.
Widespread Access and Usage, yet Uneven Infrastructure
We first have to look at the digital divide through the lens of access and usage, which form the foundation for participation in digital learning. In our survey, about 90% of the students indicated that they had access to and had tried generative AI tools such as ChatGPT, Gemini or Copilot.
However, even amidst this near-universal adoption, we must not overlook the subgroup of students who appear less likely to engage with these tools: about one out of every ten students in our sample reported never having used them. We do not intend to suggest that achieving a 100% usage rate should be the goal. Some students may have made a deliberate and pedagogically sound decision not to use such tools, aligning with their personal learning preferences. However, we suspect that others may have been unable to access these tools due to insufficient technological infrastructure—a barrier that warrants attention.
One key piece of infrastructure for generative AI use is high-speed Internet. Indeed, our survey revealed variation: some students reported having both broadband and cellular access, others had only one form, and some had unreliable access or none at all. Because many AI applications require stable Internet connectivity, these disparities translate directly into unequal opportunities for use. The data reflect this pattern: 91% of students with both broadband and cellular Internet had used AI tools, compared to 81% of those with only a cellular connection and just 61% of those with inconsistent access.
As access to AI tools becomes increasingly widespread, it becomes all the more critical to pay attention to those who remain excluded due to structural limitations. High overall adoption rates can obscure the challenges faced by a small but significant group of students who lack adequate access. Without targeted support to address these disparities, students who are already on the margins may be further left behind, deepening existing inequities in digital learning environments.
Different Uses, Common Dilemmas
While it’s important to address overall access and usage, it is equally critical to examine the disparities in how students are using AI. Decades of research on technology adoption have demonstrated that different patterns of use yield different outcomes. The same holds true for AI: some forms of engagement may short-circuit students’ cognitive processes, while others may empower them to pursue creative and innovative activities.
In our survey, we asked adolescents to tell us the types of activities for which they use AI tools. The most common uses were pragmatic in nature—such as seeking information, completing homework or assisting with research tasks. At the same time, some students engaged with AI in more creative ways, such as co-authoring stories or poems, experimenting with new ideas, or generating images and videos.

Adolescents are clearly reflecting on the potential implications of different patterns of AI use for their development and skill sets. Their responses revealed a coexistence of optimism and caution regarding the role of AI in shaping their learning experiences and personal growth. When asked to evaluate the impact of AI on core academic skills—on a five-point scale ranging from “worsens a lot” to “improves a lot”—many students expressed optimism about AI’s positive influence on their task completion, curiosity, creativity and ability to persist through challenging tasks. At the same time, a substantial number voiced concern that AI use may negatively affect these domains—particularly their capacity for independent thinking.

Student voices captured in the survey highlight the nuanced ways they are thinking about AI:
“Siento más confianza al hacer mis actividades, pero me siento desconfiada porque pienso menos.”
“I feel more confident doing my (school) activities, but I feel distrustful because I think less.”
“No siento que aprendo; me siento inseguro con la información que tiene la IA y siento que no es fiable usarla.”
“I don’t feel like I’m learning; I’m unsure about the information AI provides and feel it’s not reliable to use.”
“Menos confianza, porque lo utilizo como apoyo en mis investigaciones y muchas veces la información es equivocada.”
“Less confident, because I use it to support my research and many times the information is incorrect.”
Their responses reveal a quiet tension: students are not rejecting AI; rather, they are eager to engage with it while recognizing that such tools carry both promise and potential pitfalls. One student summed it up simply:
“Debemos pensar en la IA como una herramienta con sus consecuencias.”
“We should think about AI as a tool with its own consequences.”
Literacy Gaps in the Age of AI
As students are aware, AI has the potential to bring significant benefits, but also to introduce harm—outcomes that may depend heavily on their ability to use these tools effectively, or what we might term AI literacy.
In fact, many students, despite using AI regularly, expressed uncertainty about their own ability to use it well. In our survey, when asked to self-assess their AI skills across various tasks—such as generating text, images, or audio—the average ratings fell between “not very skilled” and “somewhat skilled.” This modest self-assessment was consistent regardless of whether students identified as frequent users, suggesting that usage alone does not necessarily translate into competence.
Moreover, only 8.1% of students reported regularly verifying the accuracy of information generated by AI tools. This indicates that many may lack the critical literacy skills required to evaluate AI outputs.
These findings point to a central challenge: students are not asking for less AI; rather, they are expressing a need for more support. Without adequate guidance, they are navigating AI-rich learning environments with a patchwork of informal knowledge, experimentation and guesswork. That desire for support is reflected in our data: seven in ten students surveyed expressed strong interest in receiving explicit instruction on how to use AI effectively for educational purposes. Even among hesitant or infrequent users, interest remained high—suggesting that reluctance may stem not from a lack of curiosity or motivation, but from unmet needs for guidance.
However, even among the 58% of students who reported having received some form of formal instruction on AI use, self-rated skill levels were no higher than those of students without such training. Both groups reported similar levels of uncertainty and reliance on AI, suggesting that existing instructional efforts may be too superficial, inconsistent or poorly aligned with students’ actual learning needs. These findings point not merely to a need for more instruction, but for sustained, thoughtful and culturally relevant AI literacy curricula—instruction that empowers students to engage with AI tools critically, creatively and responsibly.
At the same time, teachers themselves are facing a parallel challenge. Many have had limited opportunities to learn about AI and are still climbing their own learning curve. The task ahead, then, is broader than preparing students alone; it also requires equipping educators with the knowledge and confidence to guide the next generation in navigating AI as part of their education.
Looking Ahead
Our data, focused on adolescents in Chihuahua, represent only a first step toward understanding how young people in the region are engaging with AI. Much more research is needed across Mexico and Latin America to capture the diversity of students’ experiences and challenges, and to reveal how local contexts—such as infrastructure, policy or social norms—are shaping digital divides in the age of AI.

Addressing these new digital divides requires attention to their multiple dimensions: access, usage, literacy and trust. While infrastructure remains foundational, it is equally important to ensure that students have the critical skills and opportunities needed to make informed choices about whether, when and how to use AI tools. Policymakers should prioritize learning opportunities that foster critical evaluation, ethical reflection and responsible use. Equally important is investing in teacher preparation so that educators can guide students through these questions in thoughtful, context-sensitive ways. Schools should also consider moving away from overly restrictive policies toward frameworks that create space for exploration, dialogue and discernment.
Policymakers and educators should also listen to the voices of those who are the first generation of AI users at school. Their experiences and reflections can guide the evolving conversation about AI in Latin America.
Trisha Thomas is a Visiting Scholar and former Postdoctoral Researcher at the Harvard Graduate School of Education, where she worked under the supervision of Dr. Ying Xu. She studies how children interact with artificial intelligence in multilingual and educational contexts. Her work focuses on how AI can support language and literacy learning while promoting equity for diverse learners.
Ying Xu is an Assistant Professor at the Harvard Graduate School of Education. She studies how artificial intelligence can be designed and used to support children’s learning, growth, and well-being. Her work seeks to make sure AI becomes a positive force in young people’s lives, while also addressing the risks it may pose.

