When we speak of AI and technological advancements, we often speak of power — the power to predict, optimize, automate, and disrupt. But what’s often left unspoken is another form of power: the power to exclude.
In this moment of accelerating AI development and the wave of innovations it is unleashing, we’re witnessing a global geopolitical scramble to shape the norms, markets, and guardrails of intelligent systems. Much of this discourse is playing out in boardrooms and legislative chambers far removed from the everyday realities of the 3 billion people who remain offline or poorly connected, even though their lives will be shaped by these systems, directly or indirectly.
Here’s why I’m writing this piece:
If we are not intentional, we will hard-code inequality into the infrastructure of the future.
In a recent presentation on using AI to address urgent health needs in underserved contexts, a colleague from Viamo shared a poignant reminder: to always center “the 3 billion,” those whose access to the internet is limited or nonexistent. It was a powerful truth: infrastructure is not just cables and chips. It’s also context. It’s about designing for communities who access information not through smartphones but through basic feature phones. Who navigate risk not through predictive analytics, but through trust, intuition, and lived experience. Who are too often treated as data sources rather than decision-makers.
Reflecting on this, I found myself returning to a challenge I faced while leading the strategy to build Free Knowledge Hubs within the Wikimedia movement. How do you build systems of collaboration within ecosystems that have long been fragmented? How do you create governance structures that balance innovation and accountability, especially when the terrain is uneven, and the players have unequal resources (both financial and technical)?
The answer, I’ve found, lies in systems thinking. Systems thinking is not just a buzzword, but a discipline of seeing connections: between voice and infrastructure, between trust and governance, between exclusion and innovation. When we approach AI and digital transformation through this lens, we begin to ask better questions. Who is this tool built for? Whose language does it speak? Whose intelligence does it prioritize, and whose does it ignore?
AI policy cannot only be about managing risk in the Global North. It must also be about expanding opportunity in the Global South. That means not just exporting pre-trained models, but co-creating context-driven solutions. Not just debating safety, but investing in digital sovereignty, inclusive data ecosystems, and infrastructure that reflects how people actually live and work. It means viewing the digital divide not simply as a gap to close, but as a design failure to correct.
We are standing at a crossroads. AI can become yet another tool for concentrating wealth, power, voice, access, and opportunity. Or, and this is the work that fuels me, it can be harnessed to build distributed systems of knowledge, care, and justice.
We need investments that go beyond data centers and into community networks, multilingual data stewardship, and governance that starts from the fringes. We need governments willing to move beyond copy-paste regulation and the misuse of AI for surveillance and political suppression, and instead to build norms rooted in cultural context and human rights, encouraging technology that improves education outcomes, economic livelihoods, and access to better health. We need technologists and policy thinkers who can hold the tension between speed and inclusion, and who understand that relevance is not universal and that innovation without justice is a hollow promise.
I am painfully aware that there is no “one-size-fits-all” in this space. But there is a common principle that should guide us all: inclusion must be foundational, not ornamental.
Because the real race is not against China. The real race for low- and middle-income countries in AI and tech is against futures designed in ways that not only leave billions behind, but also widen the gaps and deepen the inequalities we suffer today.
If we build AI policy with humility, solidarity, and historical awareness, we won’t just make better technology. We’ll build a more connected, dignified, and human world.
Do you hear the call, and will you heed it, dear reader?
PxP is led by Yop Rwang Pam, a systems strategist and philanthropic advisor known for helping bold institutions navigate complexity and unlock transformative clarity.