In code, we trust?

Amir Noorani

Imagine waking up tomorrow to discover a covert overnight alteration to your nation’s constitution. The change arrived through a software update, not a debate or a vote. Somewhere, a machine determined that, for the sake of efficiency, some rights needed to be ‘re-balanced.’ Your privacy rights are now ‘contingent.’ Technically, you still have the right to free expression, but if your posts trigger a hidden ‘risk’ score, they are removed. There are no coups, no tanks in the streets, no ostentatious declarations. Humanity’s social contract can be re-calibrated while you sleep, with a single line of code.

But what if that isn’t tomorrow? What if it is today, happening in slow motion?

The emergence of artificial intelligence (AI) has carried it beyond the precincts of science fiction. Every click, camera feed, dataset, and decision we hand to machines shapes the narrative that surrounds us. Technology used to be a tool; now it helps create reality. Instead of merely assisting us, it makes decisions on our behalf. The key question of our era is concealed in that shift: who has the authority to define what it means to be human when a machine sits at the table?

AI is already used in courts to forecast sentencing outcomes, in hospitals to allocate scarce treatments, in banks to decide loan eligibility, and in government agencies to flag potential criminals before any crime is committed. This isn’t a preview of the future. One coded choice at a time, we already live in this world. And like every revolution before it, this one carries danger as well as promise.

The promise is dazzling. AI has the potential to democratise access to information, stabilise power systems, tailor education, and diagnose illnesses before symptoms appear. In previously unthinkable ways, it can expand human potential, improve justice, and boost creativity. But who is responsible for programming these systems? Whose ideals influence how they define “fair” and “safe”? What happens if the systems that are supposed to free us subtly change the definition of freedom?

Consider a recruiting algorithm purportedly designed to remove human bias. It starts by favouring graduates from particular universities, since past performance indicates that they “perform better.” The data are already tainted by decades of privilege, so the inequality is ossified into the model. Or contemplate “predictive policing,” promoted as impartial. Because the data show that crime occurs in over-policed communities, it sends more patrols there. More enforcement produces more arrests, more arrests generate more data, and the algorithm celebrates its own success.
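The feedback loop described above can be made concrete with a toy simulation. Everything here is invented for illustration: two districts with identical true crime rates, and a hypothetical “predictive” model that simply allocates patrols in proportion to past recorded arrests.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

# Two districts with IDENTICAL true crime rates.
TRUE_CRIME_RATE = 0.10
districts = ["A", "B"]

# Historical bias: district A starts with more recorded arrests,
# purely because it was patrolled more heavily in the past.
recorded_arrests = {"A": 60, "B": 30}

TOTAL_PATROLS = 100  # patrols to allocate each round

for round_no in range(1, 6):
    total = sum(recorded_arrests.values())
    # The "predictive" model sends patrols where past arrests occurred.
    patrols = {d: round(TOTAL_PATROLS * recorded_arrests[d] / total)
               for d in districts}
    # Arrests scale with patrol presence, not with actual crime levels,
    # since both districts offend at the same underlying rate.
    for d in districts:
        for _ in range(patrols[d]):
            if random.random() < TRUE_CRIME_RATE:
                recorded_arrests[d] += 1
    share_a = recorded_arrests["A"] / sum(recorded_arrests.values())
    print(f"round {round_no}: patrols={patrols}, "
          f"district A share of recorded arrests={share_a:.2f}")
```

Although both districts offend at exactly the same rate, the district that starts with more recorded arrests keeps receiving more patrols, so its share of the data never corrects itself: the model’s output simply confirms its own prior, and each round of “evidence” looks like vindication.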

Justice delegated to an opaque system soon stops being justice at all.

The fallibility of algorithms is not the sole issue. They are difficult to criticise because they operate invisibly, on a large scale, and with an air of infallibility. In the absence of a human decision-maker, how can a decision be appealed? How do you protest an injustice when your judge is a hidden probability score?

Privacy is also mutating before our eyes. Once a barrier against government interference, it is now something we negotiate with companies that extract, trade, and profit from our lives. Every search, swipe, and heartbeat feeds a machine that knows who you are and predicts who you might become. And free will begins to falter when control is built on prediction.

When AI foretells your guilt before you do anything, what happens to the assumption of innocence? When algorithms anticipate your thoughts and modify the material you view to support them, what happens to free thought? When persuasion is automated, tailored, and nearly irresistible, can democracy endure?

Here’s another unsettling thought experiment. What if your government began calculating “civic reliability scores” from your private chats, social media activity, and online persona? What if insurance companies set your rates not by your medical history but by a model’s prediction of your future behaviour? What if your algorithmic reputation, rather than your actions, determined your ability to work, travel, or protest?

However, there are signs of resistance, even though the pace of AI development has caught governments and campaigners off guard. The European Union’s AI Act classifies systems by the degree of risk they pose and mandates the protection of fundamental rights. The Council of Europe’s Framework Convention on Artificial Intelligence ties the advancement of AI to democratic governance and the rule of law, and invites countries beyond Europe to join. Civil society, meanwhile, is organising around demands for algorithmic accountability, transparency, and redress.

These scenarios aren’t far-fetched conjectures. They exist today as policy drafts, pilot experiments, and prototypes that quietly shape the world we live in. And they raise an essential question: who has a say in the structures that determine our destiny?

But laws alone are insufficient. The deeper challenge is philosophical: how do we translate centuries-old human rights into code? Privacy’s primary concern has shifted from physical intrusion to invisible inference. Free-speech doctrine was built around human censors; today, moderation is handled by opaque models trained on skewed data. Even creativity, once believed to reside in the soul, is being contested as algorithms write poetry, paint portraits, and compose music. To our minds, part of the point of being human is to create and express, individually and collectively. In a world flooded with synthetic art, will human creativity still matter?

Some thinkers argue that AI itself is entitled to rights. The idea seems plausible right up until you consider the risk of it diverting attention from the all-important work of safeguarding human rights against computational incursion. Granting systems “personhood” while denying humans autonomy is not progress but a mockery of it. We must never waver in our commitment to put people first.

However, fear is not the answer either. The same predictive power that jeopardises privacy can save lives. The same personalisation that sways voters can tailor education. The same automation that eliminates jobs can free us from drudgery. AI is neither hero nor villain. It is a giant mirror reflecting our idealism, prejudices, goals, and anxieties.

It’s not a matter of whether AI will transform the world; it already has. The question is whether we will shape that transformation or be shaped by it.

Much of AI today functions as a “black box,” systems whose inputs and outputs we see, but whose inner workings remain inscrutable even to their creators. Will we accept black boxes as infallible authorities, or will we demand systems that can be scrutinized, explained, and appealed? Should we let a handful of corporations dictate the rules, or should we establish democratic oversight and accountability? Will we prefer speed and simplicity over accountability, openness, and remedy?

What if AI systems were treated as civic infrastructure, democratically governed and publicly accountable? What if we developed “human-in-control” modes, in which AI assists but never takes over? What if citizens had rights over models, not just data: the ability to inspect them, contest them, even modify them?

The stakes are high. AI, under the best conditions, can bring longer lives, fairer societies, and a tremendous expansion of human knowledge. At its worst, it might develop into the most advanced control mechanism ever created, subtly undermining liberties and ingraining inequality into societies.

We stand at a hinge of history; decades from now, the decisions made in living rooms, boardrooms, and parliaments over the coming years will still be felt. The AI era is not about devices or applications. It is about redefining the relationships between personhood and power, automation and autonomy, and humans and their creations.

So, without any pretense or deception, here is the call to action: get involved right away. Demand legislation that makes decisions appealable and algorithms explicable, as fervently as previous generations fought for civil rights, and support the groups that defend digital rights. Challenge the opaque systems that affect your life. Ask your leaders who controls the machine. If you don’t act, someone else will, and they may not see humanity as you do.

Let’s be honest here: AI will not wait for us to be ready. It is already rewriting work, law, language, politics, and power. The only real question is whether we will rise to shape it or shrink to fit inside its parameters.

The machines aren’t writing destiny. We are. The pen is still in our hands, for now. What matters now is whether we will continue to use it or let the ink dry up.

Sources / Further Reading

  • “Joint Statement on Artificial Intelligence and Human Rights” (2025), Freedom Online Coalition. freedomonlinecoalition.com
  • “Rights by Design: Embedding Human Rights Principles in AI Systems,” DCO report. dco.org
  • “A Matter of Choice: People and Possibilities in the Age of AI,” UNDP Human Development Report 2025. hdr.undp.org
  • “Debunking Robot Rights Metaphysically, Ethically, and Legally,” Birhane, van Dijk & Pasquale. arxiv.org
  • “Creativity as a Human Right: Design Considerations for Computational Creativity Systems.” arxiv.org
  • “Confronting Catastrophic Risk: The International Obligation to Regulate Artificial Intelligence.” arxiv.org
  • “How Americans View AI and Its Impact on People and Society,” Pew Research Center (2025). pewresearch.org
  • Framework Convention on Artificial Intelligence, Council of Europe (2024). en.wikipedia.org
