In June, Xavier Mertens found a malicious PowerShell script stealing data from cryptocurrency-related browser extensions and documented it on the SANS ISC blog. The malware itself is pretty simple: enumerate the browser’s extensions, find the cryptocurrency-related ones, steal any important data (keys, etc.), and report it up to the mothership. Despite being an unobfuscated PowerShell script, it had an absurdly low detection rate at the time (1/53).

People don’t make a habit of running random PowerShell scripts, though, and Mertens didn’t cover how this script was originally invoked - only that wmail-endpoint.com was responsible for C2. A couple of weeks later, a possibility emerged in a thread on the MalwareRemoval forums, where a user showed that an autorun job references a short script hidden among System32 driver files, which downloads and executes data over HTTP from wmail-service.com.

These domains would be publicly tied together at the beginning of August by Pluribus One, in a report by Igino Corona on what appears to be a matured version of the same cryptostealer:

  • The C2 client script is no longer hidden in drivers - a startup task instead reads the script from the Windows Registry, XORs each byte with 0xb3, then executes the result (this decode step is sketched just after this list).
  • The C2 client now uses a simple Domain Generation Algorithm (DGA) to determine which C2 servers to connect to.
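To make the decode step concrete, here’s a minimal sketch in Python. Only the XOR-with-0xb3 step comes from Corona’s report - the registry key path and value name below are placeholders of my own, and the real startup task does this from PowerShell rather than Python.

```python
# Minimal sketch of the registry-resident loader's decode step.
# The key path and value name are hypothetical; only the
# XOR-with-0xb3 step is taken from the Pluribus One report.
import winreg

KEY_PATH = r"Software\Example"  # hypothetical location of the stored script
VALUE_NAME = "Payload"          # hypothetical value name

def read_encoded_script() -> bytes:
    """Read the XOR-encoded script blob out of the registry."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        data, _value_type = winreg.QueryValueEx(key, VALUE_NAME)
        return data if isinstance(data, bytes) else str(data).encode()

def decode_script(blob: bytes) -> str:
    """XOR every byte with 0xb3, recovering the PowerShell source."""
    return bytes(b ^ 0xB3 for b in blob).decode("utf-8", errors="replace")
```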

Corona notes that the DGA domains receive significant traffic; at the time of writing, most of them were present in Cisco Umbrella’s top 1 million domains, hovering around the 300,000th most popular. A ranking like that suggests the malware is running rampant.

Today I noticed that some of the domains generated by this DGA were available to purchase, so I did what any reasonable person would do: I registered as many of them as I could (nine!) in one of my personal AWS accounts.

Here’s what I did while building out my scalable pseudo-C2, what traffic I actually saw, and the (many) unsolved mysteries that remain.

DNS Traffic

The first and most obvious thing to do after registering these domains was to enable public DNS query logging in Route53, so I could identify any points of interest that we don’t already know about (e.g. subdomains used by other generations of the malware). Below, we can see that the DNS traffic generated by this malware was massive and surprisingly consistent, peaking around 6,000 queries per minute per domain. Given that I’ve bought nine of the C2 domains, this was roughly 3.25 million DNS requests served from my AWS account per hour.1
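For anyone wiring this up themselves, a minimal boto3 sketch is below. The hosted zone ID and log group name are placeholders, and it assumes a CloudWatch Logs resource policy already allows route53.amazonaws.com to write to the group - Route53 only delivers query logs to log groups in us-east-1.

```python
# Rough sketch: enable Route53 public DNS query logging for one zone.
# Placeholders: hosted zone ID and log group name. Assumes a resource
# policy already permits route53.amazonaws.com to write to the group.
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # query logs must land in us-east-1
route53 = boto3.client("route53")

LOG_GROUP = "/aws/route53/example-dga-domain"  # placeholder

logs.create_log_group(logGroupName=LOG_GROUP)
group_arn = logs.describe_log_groups(logGroupNamePrefix=LOG_GROUP)[
    "logGroups"
][0]["arn"].removesuffix(":*")  # Route53 wants the ARN without the trailing :*

route53.create_query_logging_config(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone ID
    CloudWatchLogsLogGroupArn=group_arn,
)
```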

I am currently withholding comments on how significant the change in the day/night cycle is until I have more data.

The queries themselves are fairly predictable, with one outlier. Almost all queries were for the root of the domain, which is expected based on the DGA we know about (though it was nice to confirm). There were, however, also a surprising number of TXT queries: below, we see that just under 10% of the total queries to Route53 in a given hour were TXT.
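The breakdown above comes from tallying the query-type field of the exported logs, along these lines. The local filename is a placeholder, and this assumes Route53’s space-separated query-log format, whose fifth field is the record type:

```python
# Tally query types from an exported Route53 query log.
# Each line looks roughly like:
#   1.0 2021-08-20T00:00:00Z <zone-id> example.com. A NOERROR UDP IAD12 <resolver-ip> -
from collections import Counter

types = Counter()
with open("query-logs.txt") as f:  # placeholder local export
    for line in f:
        fields = line.split()
        if len(fields) >= 5:
            types[fields[4]] += 1  # query type: A, AAAA, TXT, ...

total = sum(types.values())
for qtype, count in types.most_common():
    print(f"{qtype:6} {count:10,} {count / total:6.1%}")
```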

I still don’t have an explanation for this phenomenon as I write this, though I do plan to return to tweaking TXT records eventually.

HTTP Traffic

Since there doesn’t seem to be anything else interesting in DNS, I moved on to building something to stand in for the C2 server. Based on the samples I’ve seen so far, this malware only ever downloads payloads over HTTP, so using CloudFront and S3 should hopefully make building our pseudo-C2 easy.

Also, if we’re being honest, I had no idea how much traffic we’d be looking at here.

  • High estimate: If one DNS request roughly equals one attempted HTTP connection, we could be looking at 54k requests per minute, or ~900 requests per second (the arithmetic is sketched after this list). If there are many DNS requests we’re not seeing due to public DNS caching,1 there could be even more requests than predicted.
  • Low estimate: If no other C2 servers are online, the DNS query storm above could be all of the existing malware searching desperately for a C2, and it would wind down quickly once a live C2 is found.
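The high estimate is just the observed per-domain DNS rate scaled up:

```python
# Back-of-the-envelope arithmetic behind the high estimate.
domains = 9
peak_queries_per_min = 6_000                  # observed Route53 peak, per domain
dns_per_min = domains * peak_queries_per_min  # 54,000 queries/minute
print(f"{dns_per_min:,}/min ~ {dns_per_min / 60:.0f}/sec")  # 54,000/min ~ 900/sec
```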

Initial Traffic

I stood up a CloudFront distribution (listening on both HTTP and HTTPS, without redirecting) and created an alias2 to it in Route53, intentionally pointing the origin at an empty S3 bucket. This let me see which paths the malware was requesting without (…yet) serving any PowerShell to anything. Traffic to the distribution quickly stabilized, but at a much lower volume than expected - only around 220 requests per minute in total.
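Creating the alias itself is a single API call; a sketch is below. The hosted zone ID, domain, and distribution hostname are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for every CloudFront alias target.

```python
# Sketch: point the zone apex at the CloudFront distribution with an
# alias record (see note 2). Zone ID, domain, and distribution hostname
# are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example-dga-domain.com.",  # placeholder apex
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's fixed alias zone ID
                    "DNSName": "d111111abcdef8.cloudfront.net.",  # placeholder distribution
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```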

This is in stark contrast to the 54k DNS queries per minute in total across all of the domains I’ve purchased (>90% of which are A or AAAA queries), and it’s not clear why. The only caveat of using CloudFront for this is that it only supports listening on ports 80 and 443, but there isn’t any indication so far that the malware connects to the C2 on nonstandard ports.3

Pseudo-C2

The actual requests made are as expected - a random identifier is selected, then used to

C2-Managed Traffic

C2-Controlled Hosts

  • ~500 hosts
  • Geographic distribution?

Remaining Questions

  • What is going on with DNS? There has to be something else here.
  • Compare the DNS and CloudFront traffic in more detail.
  • Divert some traffic, e.g. to a host listening on more ports.
  • I doubt I can disinfect anything.

IOCs

A full list is here.

Author’s Notes

  1. Keep in mind, this is the number of requests hitting Route53 - it does not include cached requests served by the public DNS servers that an end user’s computer would be using.

  2. I’m using an alias record instead of a long-lived CNAME record because AWS considers all queries that return an alias free, while CNAME responses would cost $0.40/million. Even if public DNS servers caching a long-lived CNAME response cut the query volume reaching my authoritative nameservers to 10% or even 1% of what it is now, it still wouldn’t be free.
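     Rough numbers, using the ~3.25 million queries per hour observed above:

     ```python
     # What those queries would cost as CNAME responses at $0.40/million.
     rate = 0.40 / 1_000_000         # USD per billed query
     hourly_queries = 3_250_000      # observed above
     daily_full = hourly_queries * 24 * rate  # no caching: ~$31/day
     daily_cached = daily_full * 0.01         # even at 1% of volume: ~$0.31/day
     print(f"${daily_full:.2f}/day uncached, ${daily_cached:.2f}/day at 1%")
     ```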

  3. Checking IPs that this C2 is associated with such as 154.53.51.77 with nmap (current) and Shodan (historical), I don’t see any abnormal ports running HTTP either.