In January 2026, security researchers identified two malicious Python packages on PyPI — spellcheckerpy and spellcheckpy — that pretended to be connected to the legitimate pyspellchecker project. They looked credible enough to fool developers, but were designed to deliver a remote access trojan instead.
That may sound like a narrow developer problem.
It isn’t.
This is a clear example of a wider business issue: modern software teams increasingly rely on open-source packages they did not write, do not fully inspect, and often install in seconds. When one of those packages is malicious — or simply looks close enough to a trusted one — the risk is not theoretical. It can become a direct route into systems, credentials, and sensitive development environments.
What happened
According to Aikido, the malicious packages copied the identity of the legitimate author and linked to the real GitHub repository, making them appear trustworthy at a glance. Inside, however, the attacker hid a base64-encoded payload inside a dictionary resource file, rather than in the places security teams usually expect to find malicious logic. Later, one version added an obfuscated trigger so that simply importing and instantiating the spellchecker could execute the malware.
For non-technical readers, the simplest way to understand this is:
A developer thought they were installing a harmless spelling package.
What they may actually have installed was a hidden backdoor.
That matters because the attack did not depend on breaking into the victim’s company first. It depended on getting someone to trust the wrong package name.
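For readers who want to see the shape of the trick, here is a deliberately benign sketch. It is not the actual malware, and `suspicious_blobs` is an illustrative helper of our own: the point is that a language wordlist should contain short dictionary words, so a long base64 string that decodes cleanly is exactly the kind of anomaly a scanner can flag.

```python
import base64
import re

def suspicious_blobs(text: str, min_len: int = 40) -> list[str]:
    """Return long base64-like runs in a data file that decode cleanly.

    A wordlist should hold short dictionary words, so a kilobyte-scale
    base64 string hiding inside one is an anomaly worth flagging.
    """
    hits = []
    for token in re.findall(rf"[A-Za-z0-9+/=]{{{min_len},}}", text):
        try:
            base64.b64decode(token, validate=True)
        except Exception:
            continue  # not valid base64, ignore
        hits.append(token)
    return hits

# Benign demo: a fake "dictionary" file carrying an encoded stowaway.
payload = base64.b64encode(b"print('this could have been a loader')" * 3).decode()
wordlist = "aardvark\nabacus\n" + payload + "\nzebra\n"
print(suspicious_blobs(wordlist))  # the blob is flagged; the words are not
```

In the real attack, a blob like this sat inside a dictionary resource file, and a later release added the trigger that decoded and ran it on import.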
Why this matters
The malware was not just nuisance code. Aikido says it downloaded and launched a Python-based remote access trojan that could fingerprint the machine, contact a command-and-control server every few seconds, and execute attacker-supplied code remotely. Independent reporting also said the packages were downloaded more than 1,000 times before removal.
That is the real issue for leadership teams.
This is not just “a developer installed the wrong library.”
It is:
- potential access to a developer workstation
- potential exposure of credentials or tokens
- potential compromise of internal code, environments, or pipelines
- and potential risk spreading from one mistaken install into the wider software estate
In other words, a small naming trick at the edge of the development process can become a much bigger business problem.
What makes this attack worth paying attention to
There are always malicious packages in open-source ecosystems. What makes this case interesting is not only that the packages were fake, but that the payload was hidden in a place that looked legitimate for that kind of library: a language dictionary file. Aikido also reported that the attacker first published “dormant” versions containing the payload without activating it, and only later enabled execution in version 1.2.0.
That is a useful reminder that software risk is not always loud or obvious.
Sometimes the danger is not a flashing alarm.
Sometimes it is a package that looks normal, behaves normally at first glance, and only reveals its purpose later.
That is exactly why dependency risk is so hard to manage by intuition alone.
The bigger leadership lesson
For technical teams, this is a package hygiene and software supply chain problem.
For non-technical leaders, the lesson is broader:
your business is exposed not only through the code your team writes, but through the code your team imports, trusts, and builds on top of.
That includes:
- open-source packages
- indirect dependencies
- outdated or abandoned components
- typosquatted packages
- build-time and install-time behaviours
- developer workstation exposure
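One low-effort way to make that imported code visible is simply to enumerate it. Assuming a Python environment and the third-party pipdeptree tool (the package name queried below is illustrative), a quick inventory might look like:

```shell
# Install the inventory tool, then list direct and transitive dependencies.
pip install pipdeptree
pipdeptree                                # full tree, indirect dependencies included
pipdeptree --reverse --packages requests  # which packages pull a given one in
```

Even this rough picture usually surprises teams: the tree is far larger than the handful of libraries anyone consciously chose.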
This is where many organisations still have a blind spot. They may feel comfortable talking about “our application,” but have much less visibility into the outside code that quietly enters the environment through normal development work.
And that is the real risk.
Because if you cannot see what is flowing into your software estate, you cannot properly assess what you are trusting.
Where The Code Registry fits
The Code Registry is not claiming it would stop every malicious package from ever being installed. That would be the wrong claim.
The stronger and more credible point is this:
Incidents like this show why organisations need visibility into the software they rely on — not just the code they authored themselves.
When a business depends on software, it needs to understand:
- what components and dependencies exist in its estate
- where those components came from
- what risk sits inside them
- and how to explain that exposure clearly to technical and non-technical stakeholders
That is especially important now, because modern software is rarely built from scratch. It is assembled from layers of internal code, open-source packages, frameworks, APIs, and third-party components. If leadership only understands the top layer, it is making decisions with partial visibility.
That is not a technical problem alone. It is a governance problem.
Trust is not the same as verification
The uncomfortable truth behind this story is simple:
A package can look legitimate.
It can resemble a trusted name.
It can even point to the real upstream project.
And it can still be malicious.
That is why software trust now has to be backed by evidence.
Not paranoia.
Not guesswork.
Evidence.
For developers, that means better dependency controls and scrutiny.
For leadership, it means recognising that software assurance now includes provenance, visibility, and supply chain awareness.
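As one concrete example of "better dependency controls", Python teams can pin exact versions and artifact hashes so the installer refuses anything that does not match a reviewed lock file. A sketch using pip-tools (the package name and version below are illustrative):

```shell
# Declare the intended dependency, then compile a hash-locked file.
pip install pip-tools
echo "pyspellchecker==0.8.1" > requirements.in
pip-compile --generate-hashes requirements.in
# pip now rejects any artifact whose hash differs from the lock file.
pip install --require-hashes -r requirements.txt
```

Hash-pinning does not vet the original package, but it moves the trust decision into a single reviewed file instead of whatever a developer happens to type at install time.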
Because in 2026, one wrong package name is sometimes all it takes to create a very real business risk.
The fake spellchecker packages on PyPI are a reminder that software risk does not begin and end with the code your team writes. It also lives in dependencies, package ecosystems, and the trust decisions made every day in development workflows. That is why organisations need more than assumptions. They need visibility. They need evidence. They need to know their code.
Know Your Code. Know What’s In It. Know What You’re Trusting.
Want to Learn More?