The Trust Accelerant: How AI is Magnifying Nigeria’s Existing Crises of Fraud and Disinformation

Nigeria’s adoption of artificial intelligence is racing ahead, but the societal guardrails are struggling to keep pace. While AI itself is a neutral tool, its misuse is acting as a powerful accelerant for deep-seated national problems: economic desperation that fuels a hunger for "sure returns," long-standing tactics of financial fraud, and a chronically polarized information environment. The result is not a new crisis, but a dangerous amplification of old ones, threatening to erode the trust that holds the digital economy and society together.
From Pre-existing Problems to AI-Amplified Threats
For decades, Nigeria has grappled with "419"
scams, get-rich-quick schemes, and politically motivated disinformation. What's
changing is the fidelity and scale at which
these threats can now be deployed. Generative AI tools—capable of creating
convincing text, images, voice, and video—are lowering the barrier to entry for
criminals and bad actors. The core challenge for Nigeria is not that AI has
invented fraud or disinformation, but that it has made them cheaper, more
personalised, and harder to distinguish from reality. This piece examines the
multifaceted ways AI misuse is manifesting in Nigeria, distinguishing between
well-documented trends, emerging threats, and areas where more evidence is
needed.
1. The Old Scams, Now Turbocharged: Fraud in the Age of AI
Nigeria's digital financial landscape has become a primary
target for AI-enabled fraud, where synthetic media is used to build credibility
and automate deception.
- Hyper-Realistic
Investment Platforms: The promise of quick wealth has always been
a staple of Nigerian fraud. What's new is the level of sophistication.
Investigations into platforms like the now-collapsed Crypto Bridge
Exchange (CBEX), as detailed in ongoing reporting supported by the
Pulitzer Center, reveal the use of AI-generated marketing content,
synthetic activity logs to simulate trading volume, and chatbot-driven
customer support. This "credibility theater" helps platforms
appear legitimate long enough to attract billions of naira in deposits
before vanishing, leaving victims in their wake. The damage is twofold:
direct financial loss and a deepened public cynicism toward all digital
investment opportunities.
- Deepfake Voice Impersonation on Trusted Channels: Voice cloning technology,
which can replicate a person's voice from just a few seconds of audio,
represents a significant evolution of social engineering. Globally, this
has been used to supercharge CEO fraud and emergency scams. In the
Nigerian context, the threat model is clear: WhatsApp voice notes and
calls, the bedrock of personal and professional communication, could be
exploited for urgent requests—a family member in trouble, a desperate plea
from a pastor, a last-minute instruction from an "Oga" to change
bank details for a vendor payment. While documented public cases in
Nigeria remain limited, cybersecurity analysts warn that the technology is
readily available and cheap, making it a potent tool for criminals. The
NCC's continuous efforts to update its cybersecurity framework are
critical here, but public awareness of this specific threat is still
nascent.
2. The Information Ecosystem Under Siege: From Cheap Fakes to Deepfakes
The manipulation of information for political or social gain
is not new in Nigeria, but AI provides powerful new tools to create and
disseminate falsehoods.
- Borrowed Authority (Deepfake Ads and Public Figures): The authenticity of
video and audio has long been accepted as "proof." This
foundational trust is now under direct attack. In 2025, the Advertising
Regulatory Council of Nigeria (ARCON) issued a public warning about
AI-generated advertisements that fraudulently used the image and voice of
President Bola Tinubu to promote Ponzi schemes. This is a landmark signal:
when the face of the highest office in the land can be convincingly faked,
the entire population becomes more vulnerable to scams, particularly on
fast-scrolling social media platforms where content is consumed without
scrutiny.
- Political
Disinformation on a Spectrum: AI's role in elections is not
always about creating flawless deepfakes. Often, the most effective
disinformation relies on a spectrum of manipulation. As noted by
Africa-focused legal and policy commentary, during Nigeria's 2019 election
cycle, a manipulated clip (a "cheap fake") was circulated to
falsely allege an opposition candidate had promised amnesty to Boko Haram.
The goal was not universal belief, but to generate anger, deepen tribal
divisions, and weaponise uncertainty. Now, generative AI can create
convincing but entirely synthetic audio or video to similar effect, making
the fact-checker's job exponentially harder. The damage is a creeping
cynicism where the very concept of a verifiable truth is undermined.
- Health
Misinformation at Industrial Scale: The COVID-19 pandemic starkly
illustrated how quickly unverified health advice—miracle cures, dangerous
preventatives—could spread via WhatsApp and social media. Research
analysing Nigeria-focused fact-checks during this period showed a high
concentration of false claims around treatments and cures. Generative AI
now allows a single piece of misinformation to be endlessly rewritten,
translated into local languages, re-voiced, and recaptioned for different
audiences, dramatically increasing its reach and resilience.
3. Systemic Risks in a Digital Economy: Bias, Surveillance, and Integrity
Beyond fraud and disinformation, AI introduces or amplifies
systemic risks within Nigeria's digital infrastructure and institutions.
- Bias
in Automated Decisions: AI systems used in lending and
recruitment make decisions based on data. If that historical data reflects
existing societal inequalities (e.g., credit histories tied to gender or
ethnicity), the AI model can learn and automate that discrimination. This
is a concern for Nigeria's rapidly growing digital credit market. While
recent consumer-protection rules from regulators like the Federal
Competition and Consumer Protection Commission (FCCPC) now mandate
explicit opt-in consent and prohibit predatory "automatic
top-up" loans, the risk of algorithmic bias in credit scoring remains
an area requiring ongoing vigilance from the Nigeria Data Protection
Commission (NDPC).
- Surveillance
and the Chilling Effect on Civic Space: Concerns about the use of
digital surveillance tools, including potential AI-powered facial
recognition, have been a recurring theme in Nigerian civic discourse,
particularly surrounding the #EndSARS protests. Some legal observers and
civil society organizations have claimed these technologies were used to
identify protesters. These are serious allegations that, as legal experts
note, demand transparent, evidence-based public accountability. Regardless
of the veracity of any single claim, the perception of
being watched by an unaccountable AI can have a chilling effect on
legitimate civic participation and freedom of assembly.
- Academic
Integrity and the Need for Pedagogical Shift: Nigerian tertiary
institutions are grappling with a surge in students using AI tools like
ChatGPT to complete assignments. Studies available on platforms like
ResearchGate are beginning to document these challenges. Framing this
solely as "cheating" misses the larger point. The rise of AI
forces a necessary conversation about what skills are being assessed. A
punitive approach reliant on flawed AI-detection tools is less effective
than a pedagogical shift towards in-class assessments, oral defences,
project-based learning with process logs, and teaching students how to
critically use AI as a tool rather than outsourcing their
thinking to it.
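The credit-scoring concern raised above can be made concrete with a short sketch. The data below is entirely hypothetical, invented for illustration: it shows how a naive scoring rule "trained" on a skewed approval history reproduces that skew while ignoring actual repayment behaviour, which is the core of the algorithmic-bias risk.

```python
# Illustrative sketch with hypothetical data: a naive credit model
# trained on biased approval history reproduces that bias.
from collections import defaultdict

# Hypothetical historical records: (region, repaid_on_time, was_approved).
# Past approvals were concentrated in one region, regardless of repayment.
history = [
    ("north", True, True), ("north", True, False),
    ("north", False, False), ("north", True, False),
    ("south", True, True), ("south", True, True),
    ("south", False, True), ("south", True, True),
]

# "Train": learn per-region approval rates from the historical data.
# Note that repayment behaviour (the real signal) is never used.
by_region = defaultdict(list)
for region, _, approved in history:
    by_region[region].append(approved)
model = {r: sum(v) / len(v) for r, v in by_region.items()}

def score(region):
    # Approve only if the historical approval rate for this region exceeds 0.5.
    return model[region] > 0.5

# Two equally creditworthy applicants get different outcomes,
# purely because of where past approvals were concentrated.
print(score("north"), score("south"))  # False True
```

Swapping the explicit `region` feature for a proxy (a phone prefix, an address) changes nothing: the model still automates the historical disparity, which is why the text argues for ongoing NDPC vigilance rather than one-off audits.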
What Needs to Be Done: Building Digital Trust
Infrastructure
Nigeria does not need more hype about AI. It needs a
pragmatic, multi-stakeholder effort to build the infrastructure of trust. These
are not emergency measures but fundamental, long-term investments.
- Targeted
Regulation for High-Risk Sectors: Move beyond general warnings.
Regulators like ARCON, the FCCPC, and the Securities and Exchange
Commission (SEC) should co-develop mandatory provenance and verification
protocols for ads in high-risk categories: financial investments, loans,
and health products. This requires platform-side verification and legally
enforceable, rapid takedown mechanisms. ARCON’s public stance is a vital
first step, but it needs a scalable, cross-platform operational workflow.
- Adopt
and Adapt National Technical Standards: Nigeria should formalise
and mandate a national standard for synthetic media labelling (e.g.,
digital watermarks and disclosure rules). Aligning this with the work of
bodies like NITDA’s National Centre for Artificial Intelligence and
Robotics (NCAIR) ensures that courts, regulators, and technology platforms
are working from a shared technical and legal definition, making
enforcement feasible.
- Operationalise
a Cross-Sectoral Fraud Fusion Centre: The time between a fraud
report and action (like freezing a bank account) is often measured in days
or weeks—an eternity in a digital scam. A formal, secure fusion desk with
direct liaison officers from telecom companies (under NCC guidance), banks
(under CBN guidance), and major digital platforms could shorten that
window to minutes.
- Embed
Digital Literacy in Existing Community Networks: Abstract digital
literacy campaigns have limited reach. Nigeria can leverage its existing
community infrastructure—NYSC corps members, trade unions, faith-based
organisations—to teach five simple, actionable verification rules: verify
the identity of the sender, verify the destination account number, verify
the source of a video, verify the urgency of a request, and verify the
history of a platform.
- Demand
Public-Sector Transparency: When government officials or
institutions claim a piece of media is a "deepfake," they
should, where possible, publish the verification method and evidence.
Without this, the accusation of "fake" risks becoming a
convenient political tool to dismiss genuine criticism or inconvenient
truths, further eroding public trust.
- Reform
University Assessment, Not Just Policing: The National
Universities Commission (NUC) and individual institutions should lead a
review of assessment methods. The focus should shift toward evaluating
process and critical thinking—through oral defences, supervised work, and
project portfolios—rather than simply policing the final output, which AI
can easily generate.
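The synthetic-media labelling standard proposed above can be sketched in miniature. Everything below is hypothetical, not any actual NITDA/NCAIR scheme; real content-credential standards are far richer, but the tamper-evidence principle is the same: bind a disclosure (e.g. "AI-generated") to a hash of the media and sign it, so any edit to the media or the label is detectable.

```python
# Minimal sketch of a tamper-evident synthetic-media label.
# Issuer key, field names, and disclosure values are all hypothetical.
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # hypothetical issuer key

def label(media_bytes, disclosure):
    """Attach a signed disclosure record to a piece of media."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "disclosure": disclosure,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify(media_bytes, record):
    """Check that the media matches the record and the record is unaltered."""
    claimed = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(record["sig"], expected)
            and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())

video = b"...synthetic ad bytes..."
rec = label(video, "AI-generated")
print(verify(video, rec))            # True: label intact, media unchanged
print(verify(b"edited bytes", rec))  # False: media no longer matches label
```

A shared national definition of what such a record must contain, and who may sign it, is what would let a court, a regulator, and a platform all reach the same verdict on the same clip.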
Nigeria must reframe its approach to AI from a futuristic tech trend to a present-day challenge of digital trust infrastructure. The misuse of AI is not a separate problem; it is an accelerant for the fires of fraud and division we have long fought.
Push your bank to explain its fraud-detection protocols. Ask your professional body to develop AI-use guidelines. Demand that your children's schools teach critical thinking about digital media. Regulators must move from issuing warnings to enforcing accountability. Trust is not a given; it is infrastructure that must be consciously and collectively built, right now.