AI AND SOCIAL CONTROL
- diegorojas41
- Mar 12
- 6 min read

Current Reality: AI Surveillance Today
AI is already woven into modern life. It is used not only to make decisions faster or to handle customer service, but also to monitor, regulate, and even control society's daily operations. Surveillance systems worldwide increasingly rely on AI-driven cameras, facial recognition (technology that identifies people by analyzing unique facial features like eye spacing and bone structure), and predictive policing algorithms (software that analyzes past crimes and demographics to forecast future criminal activity) to assess threats, identify faces in crowds, and target individuals based on data analysis.
In countries with strict regulations, AI has been used to monitor online behavior and evaluate “loyalty” to the state.
Side Note: Donald Trump, who could again become the president of the US, has consistently demanded absolute loyalty from his supporters and officials. Based on this, it's not hard to imagine him developing a system where social and political loyalty becomes a requirement for participation in society. Under such a system, personal advancement and opportunities might depend more on showing loyalty to those in power than on merit or democratic principles. (1)
For instance, China's social credit system (a nationwide scoring system that rates citizens' behavior and trustworthiness) combines AI and surveillance, awarding or docking points to incentivize or punish citizens' behaviors. These systems raise serious questions about privacy and people's freedom as the "watcher" shifts from humans to machines running algorithms (step-by-step computer instructions that process data to make decisions, like recipes telling AI how to analyze information) designed to follow the needs and wants of those in power.
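To make the mechanism concrete, here is a deliberately simplified toy sketch of a point-based scoring scheme. Every rule name, point value, and threshold below is invented for illustration; it is not a description of any real system's actual rules.

```python
# Hypothetical behavior rules and point values (invented for illustration).
SCORE_RULES = {
    "paid_utility_bill_on_time": +5,
    "volunteered_locally": +10,
    "jaywalking": -10,
    "smoking_in_no_smoking_area": -15,
    "missed_fine_payment": -25,
}

# Hypothetical cutoff: below this score, travel purchases are blocked.
TRAVEL_BAN_THRESHOLD = 50

def update_score(score: int, events: list[str]) -> int:
    """Apply each recorded behavior to a citizen's running score."""
    for event in events:
        score += SCORE_RULES.get(event, 0)
    return score

def travel_allowed(score: int) -> bool:
    """Check the score against the invented travel-restriction threshold."""
    return score >= TRAVEL_BAN_THRESHOLD

score = update_score(100, ["jaywalking", "missed_fine_payment", "jaywalking"])
print(score, travel_allowed(score))  # 55 True
```

The unsettling part is how little code this takes: once everyday behavior is captured as data, punishing it automatically is trivial.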
Potential Future Scenarios: AGI Serving the Elite
Imagine a future in which AGI (Artificial General Intelligence) - AI with the capacity to understand, learn, and act across a wide range of tasks, unlike current AI that excels only at specific functions - answers only to the wealthiest and most powerful individuals.
This AGI would not only serve their corporate or political ambitions but also manage complex systems to predict, track, and shape public opinion in their favor. With AGI’s power, the elite could protect their assets and interests with unprecedented precision, understanding economic markets, social unrest, and even climate impacts before they arise, allowing them to take preventive measures that benefit them while excluding others from protection or opportunity.
Side Note: What is going to deter tech leaders like Sam Altman, Mark Zuckerberg, Elon Musk, or any other billionaire—who currently control AI development—from using advanced AI for their own benefit first? Once they achieve AGI, what prevents them from using it to gain unprecedented imperial power and total control? And consider the greater danger: what if someone like Trump, arguably the most narcissistic figure in modern politics, gains access to this technology? The temptation to use AGI to control not just America, but potentially the entire world, would align perfectly with his demonstrated authoritarian tendencies.
For instance, AGI could analyze billions of data points (individual pieces of information ranging from shopping habits and location data to social media posts and internet searches), to predict which communities are most likely to resist authoritarian policies or which sectors of society might be inclined to protest wealth inequality.
By preemptively curbing such groups or manipulating narratives, the elite could stop the progress of grassroots movements or opposition before they even gather momentum. When technology allows for the full control of communication channels and economic leverage, individuals may find it nearly impossible to speak freely or resist, facing a future dictated by the whims of the elite.
AI as a Weapon Against Dissent

AI’s power also lies in its potential to identify, isolate, and neutralize dissent in ways previously unimaginable. If leveraged for control, it could become a direct tool against anyone attempting to resist or challenge the existing power structures. This weaponization of AI could involve automated profiling (where AI systems automatically categorize people based on their digital footprint, including online behavior, purchases, and social connections), of individuals based on social media, flagged keywords, associations, or interests - essentially using personal data as a tool for predicting, punishing, or redirecting dissenting behavior.
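The core of such automated profiling is mundane pattern matching scaled up. The toy sketch below flags posts against an invented watchlist of terms with made-up "risk" weights; real systems would be vastly more complex (and more opaque), but the logic of scoring people by what they say is the same.

```python
# Invented watchlist: term -> hypothetical "risk" weight.
FLAGGED_TERMS = {"protest": 3, "strike": 2, "inequality": 1}

def risk_score(posts: list[str]) -> int:
    """Sum the invented weights for every flagged term found in a user's posts."""
    score = 0
    for post in posts:
        for term, weight in FLAGGED_TERMS.items():
            if term in post.lower():
                score += weight
    return score

posts = ["Join the protest on Saturday", "Wealth inequality is rising"]
print(risk_score(posts))  # 4
```

A score like this could then silently feed into decisions the person never sees: a denied job application, a frozen account, a shadowbanned profile.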
For example, if AGI is employed by an authoritarian government, individuals expressing frustration over inequality or injustice might find their job applications suddenly denied, their bank accounts under scrutiny, or their social media accounts silenced, creating a chilling effect on free speech and freedom of thought.
AI’s dominance in media and information platforms also plays a crucial role. Algorithms that favor certain viewpoints can be tweaked to amplify voices supportive of elite agendas while silencing critical or oppositional perspectives. The more AI infiltrates these areas, the more it may reinforce an illusion of consent and approval, perpetuating a cycle of silence among those who fear their opposition may result in personal, professional, or even physical consequences.
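How small such a "tweak" can be is worth seeing. In this hypothetical ranking sketch, two posts have comparable engagement, but a single multiplier applied to the favored stance decides which one the public sees first. The boost factor and stance labels are invented for illustration.

```python
def rank(posts: list[dict], favored_stance: str, boost: float = 2.0) -> list[dict]:
    """Sort posts by engagement, multiplying favored-stance posts by a boost factor."""
    return sorted(
        posts,
        key=lambda p: p["engagement"] * (boost if p["stance"] == favored_stance else 1.0),
        reverse=True,
    )

feed = [
    {"id": 1, "stance": "critical", "engagement": 100},
    {"id": 2, "stance": "supportive", "engagement": 60},
]
# Despite lower engagement, the supportive post is boosted to the top.
print([p["id"] for p in rank(feed, favored_stance="supportive")])  # [2, 1]
```

To users the feed still looks organic; the bias lives in one parameter nobody outside the company can inspect.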
The Chinese Model: A Warning Sign
Here are a few real examples from China's social credit system.
1. Travel and Access Restrictions: A significant aspect of the social credit system is restricting movement for citizens with low scores. In some cases, people have been blacklisted from purchasing train or airplane tickets. By 2019, over 2.5 million individuals were restricted from flights, and hundreds of thousands were barred from high-speed rail. These punishments can be triggered by failing to pay fines, disrespecting legal rulings, or committing minor offenses, such as smoking in non-smoking areas or even forgetting to pay utility bills on time (2)(3).
2. Employment and Education Consequences: Some local governments impose penalties on low-credit individuals, such as banning them from public sector jobs or promotions, or blocking their children from attending private schools. In extreme cases, social credit has been used to prevent individuals from renting apartments or receiving high-speed internet access. Certain business owners with poor ratings face difficulties in accessing loans, which impacts their ability to operate or grow their businesses (4).
3. Public Accountability: Local systems sometimes “name and shame” individuals with low scores. In cities like Suining and Rongcheng, people can lose points for actions like jaywalking or failing to adhere to recycling regulations. These scores and violations may be publicized online or on local bulletin boards, creating a societal pressure that discourages non-compliant behavior.
Additionally, societies should also be concerned about how AI surveillance could be used to track and suppress dissent. In cases where groups or communities gather to discuss grievances or organize protests, advanced AI facial recognition and data-gathering tools often help authorities preemptively identify and shut down any attempts at mobilization. For example, in Xinjiang, facial recognition systems coupled with AI algorithms monitor Uighur communities, analyzing patterns that could suggest dissent. Police then take preemptive actions, such as detaining community members for "re-education" before any actual protest occurs (3)(4).
Emerging Risks and Ethical Dilemmas
With technology advancing at such a rapid pace, ethical concerns lag behind its practical application. The AI-driven world needs regulations and frameworks that ensure technology doesn’t cater only to the powerful, but instead supports universal rights and freedoms. However, those currently creating these frameworks and developing AI - largely in private, elite-run companies - are under little obligation to address these ethical dilemmas or consider the broader societal impacts of their innovations.
In an alarming trend, major AI companies are systematically dismantling their ethics teams - the very groups tasked with ensuring AI development remains safe and humane. Google fired prominent ethics researchers like Timnit Gebru and Margaret Mitchell for raising concerns about AI risks. OpenAI dissolved its safety teams just as its AI became more powerful. Even Microsoft laid off its entire ethics and society team while racing to integrate AI into its products.
These companies are not just cutting jobs, they're deliberately removing the watchdogs who were supposed to protect the public interest, leaving us increasingly vulnerable to unchecked AI development.
The message I receive from these companies is a chilling one: profit and power matter more than public safety. (5) (6)
What's at Stake: Democracy in the Digital Age
The level of social and technological integration in China's system provides insights into how data-driven governance (the use of large-scale data and artificial intelligence to help make and enforce government decisions and policies) and surveillance influence individual and community behavior on a large scale. This approach highlights the power and ethical complexities surrounding AI in regulatory frameworks.
And one more thought, if you think a social credit system couldn't happen in a neighborhood near you, think again. The same AI-driven surveillance and data collection technology that powers control in places like China is already being explored and adapted globally, often under the pretense of public safety or efficiency. With AI and surveillance systems shaping behaviors and enabling powerful interests to protect their stakes, control over societal behavior could easily become a widespread reality.
Thanks for reading. Abrazos.
Diego Rojas