Unveiling LummaC2 stealer’s novel Anti-Sandbox technique: Leveraging trigonometry for human behavior detection
The Malware-as-a-Service (MaaS) model, with its readily available offerings, remains the preferred method for emerging threat actors to carry out complex and lucrative cyberattacks. Information theft is a significant focus within MaaS, specializing in the acquisition and exfiltration of sensitive information from compromised devices, including login credentials, credit card details, and other valuable data. This illicit activity represents a considerable threat that can lead to substantial financial losses for both organizations and individuals.
In this blog post, we will take a deep dive into the new Anti-Sandbox technique the LummaC2 v4.0 stealer uses to avoid detonation if no human mouse activity is detected. So that the analysis can be reproduced, we will also assess the packer and LummaC2 v4.0's new Control Flow Flattening obfuscation (present in all samples by default), which must be dealt with to effectively analyze the malware. Analysis of the packer is also relevant, as the threat actor selling LummaC2 v4.0 strongly discourages spreading the malware in its unaltered form.
LummaC2 v4.0 updates
LummaC2 is an information stealer written in C and sold on underground forums since December 2022. KrakenLabs previously published an in-depth analysis of the malware assessing LummaC2's primary workflow, its different obfuscation techniques, and how to overcome them to effectively analyze the malware with ease. The malware has since gone through different updates and is currently on version 4.0.
Some of these significant updates include:
- Control Flow Flattening obfuscation implemented in default builds (even without using a packer).
- New Anti-Sandbox technique to delay detonation of the sample until human mouse activity is detected.
- Strings are now XOR encrypted instead of simply modified by adding junk strings in the middle.
- Supports dynamic configuration files retrieved from the C2. The configuration is Base64 encoded and XORed with the first 32 bytes of the configuration file (a decoding sketch follows this list).
- Forces threat actors to use a crypter for their builds.
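The following is a minimal Python sketch, based only on the description above, of how such a configuration could be decoded. It assumes the first 32 bytes of the Base64-decoded blob act as the XOR key for the remainder; the function name is our own.

import base64

def decode_lumma_config(b64_blob: str) -> bytes:
    raw = base64.b64decode(b64_blob)
    key, body = raw[:32], raw[32:]          # first 32 bytes are assumed to be the XOR key
    return bytes(b ^ key[i % 32] for i, b in enumerate(body))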
Some of these functionalities have been covered in recent publications.
Packer
The malware analyzed throughout this post comes from the sample b14ddf64ace0b5f0d7452be28d07355c1c6865710dbed84938e2af48ccaa46cf. The initial component within the sample is the Packer, which serves as the outer layer of LummaC2 v4.0. Its primary function is to obfuscate the malicious payload and facilitate its execution at runtime without spawning additional processes in the system (it uses CreateThread instead). The packer's architecture consists of two distinct layers.
We will now delve into the steps the malware takes to execute the payload through these layers, as well as the various techniques it employs to hinder and slow down the analysis process.
Layer 1
The first layer executes a large number of junk assembly instructions; they differ between packed samples but do not perform any relevant action. It then uses obfuscation techniques such as push+ret and jz+jnz to break disassembly and complicate the analysis, as can be seen in the following figure:
The malware then executes many more junk instructions that do not alter the functionality of the program and proceeds to enter a large, useless loop, which can be considered an "anti-emulation" technique. It also checks that the value calculated at the end of the loop matches the expected one, to ensure the loop was actually executed. It then continues by creating a Mutex to ensure no other copies of the malware run at the same time on the infected machine. The Mutex name varies between samples.
After this initial logic, the packer starts by decrypting a 15-character API name from kernel32.dll. The string is encrypted and built on the stack byte by byte. The API being resolved is "VirtualProtect", which is used to set PAGE_EXECUTE_READWRITE protection on a fixed address that will contain the second layer.
After VirtualProtect is successfully executed, the packer proceeds to decrypt the second stage, which has a hardcoded size of 6556 bytes. Once the layer has been decrypted, execution is transferred via a call instruction to the fixed address of the second layer. The following picture shows the decryption algorithm for the next layer:
Layer 2
This layer is very similar to the first one, as it uses the same obfuscation techniques to break disassembly and slow down analysis. However, this second stage will eventually extract, decrypt and execute LummaC2 v4.0. This is accomplished by loading a resource using LoadResource and LockResource. The resource name is hardcoded as "3" and contains the encrypted final stage.
The resource is decrypted using an algorithm very similar to the one in the first layer; the constants change, but the algorithm remains the same.
Once the resource has been decrypted, the packer checks whether it is a PE file (it reads the e_lfanew value at offset 0x3C and verifies that the "PE" magic is present at that offset from the base address of the decrypted resource). If the check fails, the program terminates abruptly. This shows that the packer only expects a PE file in its resources.
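A short Python sketch of this validation, written from the description above (the function name is ours), would look as follows:

import struct

def looks_like_pe(data: bytes) -> bool:
    if len(data) < 0x40:
        return False
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]   # pointer to the NT headers
    return data[e_lfanew:e_lfanew + 2] == b"PE"          # "PE" magic expected there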
A copy of the decrypted resource is then made into an allocated buffer. At this point, we can safely dump the unpacked memory just after it has been written, decrypted, to the allocated buffer (and before relocations are applied). We now have the unpacked LummaC2 v4.0 stealer, which we can save for further analysis.
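If the sample is being debugged in IDA, a small IDA Python sketch like the following can be used to dump the buffer; the addresses are placeholders to be taken from the arguments observed at runtime, and the output file name is arbitrary.

import ida_bytes

buffer_addr = 0x00000000   # address of the allocated buffer (fill in from the debugger)
buffer_size = 0x00000000   # size of the decrypted payload (fill in from the debugger)

with open("lummac2_unpacked.bin", "wb") as f:
    f.write(ida_bytes.get_bytes(buffer_addr, buffer_size))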
The packer finally loads this new PE into its process virtual address space by copying the payload and applying relocations. It then executes the Original Entry Point via CreateThread, using NTHeaders->OptionalHeader.AddressOfEntryPoint as the ThreadRoutine parameter.
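For reference, the entry point that ends up being passed to CreateThread can be located in a dumped and properly rebuilt copy of the payload with the third-party pefile library (a sketch under that assumption, with the file name carried over from the previous example):

import pefile

pe = pefile.PE("lummac2_unpacked.bin")
oep_rva = pe.OPTIONAL_HEADER.AddressOfEntryPoint
oep_va = pe.OPTIONAL_HEADER.ImageBase + oep_rva
print(f"OEP RVA: {oep_rva:#x}, VA: {oep_va:#x}")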
This will transfer execution to the unpacked LummaC2 v4.0. The following figure shows the main routine of the malware:
Control Flow Flattening
Control Flow Flattening is an obfuscation technique aimed at breaking the original flow of the program and complicating its analysis. Furthermore, it makes use of opaque predicates and dead code to complicate analysis and make identification of relevant blocks more difficult.
Opaque predicates are employed to introduce intricacy into the primary program logic. Typically, this is achieved by creating additional branches within the control flow, often through the utilization of conditional jumps. Even though these conditional jumps are present, the overall behavior of the program remains deterministic.
Dead code refers to parts of the malicious code that are inactive or unreachable during the execution of the malware. Dead code may include obsolete functions, unused variables, or entire blocks of instructions that do not contribute to the malware's intended functionality. In LummaC2 v4.0, some dead code blocks can be recognized, for instance, by calls to other identified routines of the analyzed malware with invalid parameters.
The following picture represents the control flow of a program before and after Control Flow Flattening has been applied. As can be seen, this obfuscation technique is used to break the flow of a program by flattening it.
This detailed depiction from Quarkslab provides a more in-depth exploration of the distinct elements within a control flow graph that has been subjected to Control Flow Flattening obfuscation.
When dealing with this form of obfuscation, the initial step involves identifying the main dispatcher, the relevant blocks, and the predispatcher.
The main dispatcher represents the program block where execution always returns after completing one of its branches.
The relevant blocks are those integral to the functional program, such as Block 1, Block 2, etc., as seen in the initial representation.
The predispatcher holds the responsibility of altering the parameter’s value, which the main dispatcher evaluates to determine the redirection of the execution flow toward a specific branch.
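To make these roles concrete, here is a minimal, generic Python illustration of flattened control flow (our own example, not LummaC2 code): a state variable drives a dispatcher loop, relevant blocks do the actual work, and a predispatcher updates the state after each block.

def flattened_example(x):
    state = 0                  # value evaluated by the main dispatcher
    result = 0
    while True:                # main dispatcher: execution always returns here
        if state == 0:         # relevant block 1
            result = x + 1
            state = 2          # predispatcher: select the next block
        elif state == 2:       # relevant block 2
            result *= 3
            state = 7
        elif state == 7:       # exit block
            return result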
We will now delve into the initial routine executed by the unpacked LummaC2 v4.0, which incorporates an anti-sandboxing technique that we will examine in more detail shortly.
Control structure on stack
Identifying relevant blocks is not easy once CFF has been applied. We are going to analyze how this routine manages its control flow in order to make analysis easier. LummaC2 v4.0 will mostly store the value needed for the control flow dispatchers (sometimes more than one value is used to redirect the flow of the program) either in a local variable or at an offset from a memory location pointed to by a register.
The following picture shows the first instructions executed in the routine (which is the one responsible for the anti-sandbox protection), the "prologue". As can be seen, extra stack space of 0xD8 bytes is reserved and its address assigned to the ESI register. This register is used throughout the entire routine, with accesses at offsets relative to it.
To ease analysis, we can create a structure spanning the extra stack space allocated and fill in its fields as we reverse engineer the sample.
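A minimal IDA Python sketch for creating such a structure follows; the structure and member names are hypothetical and would be refined as offsets are identified during reversing.

import idc

sid = idc.add_struc(-1, "cff_stack_ctx", 0)
# Start with a dispatcher value at offset 0 and one opaque blob; split the blob
# into meaningful members as their purpose is discovered
idc.add_struc_member(sid, "dispatcher_value", 0x00, idc.FF_DWORD | idc.FF_DATA, -1, 4)
idc.add_struc_member(sid, "unknown", 0x04, idc.FF_BYTE | idc.FF_DATA, -1, 0xD8 - 4)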
Relevant blocks
Unit42 researchers published an IDA Python script to help reverse engineer complex code. The following code is a slightly modified version of the one they presented, so that it runs on IDA Python 7.4+ and can execute in a running debugging session given an end_address.
import idc
import ida_dbg
from idautils import Heads
from idc import CIC_ITEM, TRACE_STEP
from ida_dbg import WFNE_ANY, WFNE_CONT

def init_heads(start_addr=None, end_addr=None):
    # Reset the color of every instruction in the selected range (or the
    # current segment) to white before tracing starts
    if not start_addr or not end_addr:
        start_addr = idc.get_segm_start(idc.get_screen_ea())
        end_addr = idc.get_segm_end(idc.get_screen_ea())
    heads = Heads(start_addr, end_addr)
    for i in heads:
        idc.set_color(i, CIC_ITEM, 0xFFFFFF)

def get_new_color(current_color):
    # Darken the instruction color one step each time it is executed again
    colors = [0xffe699, 0xffcc33, 0xe6ac00, 0xb38600]
    if current_color == 0xFFFFFF:
        return colors[0]
    if current_color in colors:
        pos = colors.index(current_color)
        if pos == len(colors) - 1:
            return colors[pos]
        else:
            return colors[pos + 1]
    return 0xFFFFFF  # Default color

def main_debug(end_addr):
    if not end_addr:
        print("[!] Error, end_addr for main_debug not specified.")
        return
    idc.enable_tracing(TRACE_STEP, 1)
    # Continue the debugger, now configured for step tracing
    event = ida_dbg.wait_for_next_event(WFNE_ANY | WFNE_CONT, -1)
    while True:
        event = ida_dbg.wait_for_next_event(WFNE_ANY, -1)
        addr = idc.get_event_ea()
        if end_addr:
            if addr == end_addr:
                print("[.] End address reached.")
                break
        current_color = idc.get_color(addr, CIC_ITEM)
        new_color = get_new_color(current_color)
        idc.set_color(addr, CIC_ITEM, new_color)
        if event <= 1:
            print("[.] Finished debugging.")
            break
    ida_dbg.suspend_process()  # Wait for debugger to reach this point. PROGRAM FINISHED after this

if __name__ == '__main__':
    start_addr = None
    end_addr = 0x011D21B8
    init_heads(start_addr, end_addr)  # Set white color (default)
    main_debug(end_addr)
With this script, we can (in a debugging session) select an end_address and let the debugger execute until this address is reached (or the debugging session is finished by other means). As the debugger runs, it will color the instructions it executes. The darker the instruction looks, the more times the debugger executed it.
As a result, we can easily identify those code blocks that got executed multiple times (they may be pre-dispatcher blocks), those that got executed only once and even detect dead code blocks that never get executed. This can significantly improve the reversing experience of the malware sample, especially if we can avoid looking at dead code.
New Anti-Sandbox technique: Using trigonometry to detect human behavior
LummaC2 v4.0 makes use of a novel anti-sandbox technique that forces the malware to wait until “human” behavior is detected in the infected machine. This technique takes into consideration different positions of the cursor in a short interval to detect human activity, effectively preventing detonation in most analysis systems that do not emulate mouse movements realistically.
The malware first obtains the initial position of the cursor with GetCursorPos() and then waits in a loop, sleeping 300 milliseconds between checks, until a newly captured cursor position differs from the initial one.
Once it has detected mouse movement (i.e., the cursor position has changed), it saves the next 5 positions of the cursor by executing GetCursorPos() with a small Sleep of 50 milliseconds between captures.
After these 5 cursor positions have been saved (let us name them from now on P0, P1, …, P4), the malware checks whether every captured position is different from its preceding one. For instance, it checks if ((P0 != P1) && (P1 != P2) && (P2 != P3) && (P3 != P4)). If this condition is not met, LummaC2 v4.0 starts over, capturing a new "initial position" with a Sleep of 300 milliseconds until mouse movement is found, and then saving 5 new cursor positions. This process repeats indefinitely until all consecutive cursor positions differ.
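The sampling logic described so far can be reproduced with a short Python sketch (our own code, Windows only) that calls the documented Win32 API GetCursorPos() through ctypes; the helper names are ours.

import ctypes
import time

user32 = ctypes.windll.user32

class POINT(ctypes.Structure):
    _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

def get_cursor_pos():
    pt = POINT()
    user32.GetCursorPos(ctypes.byref(pt))
    return (pt.x, pt.y)

def capture_positions():
    # Wait until the cursor moves away from its initial position (checked every 300 ms)
    initial = get_cursor_pos()
    while get_cursor_pos() == initial:
        time.sleep(0.3)
    # Capture 5 cursor positions, 50 ms apart
    positions = []
    for _ in range(5):
        positions.append(get_cursor_pos())
        time.sleep(0.05)
    return positions

positions = capture_positions()
# Every consecutive pair of positions must differ, otherwise sampling starts over
all_different = all(positions[i] != positions[i + 1] for i in range(4))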
At this point we know the malware ensures that detonation will not occur until the mouse is moved "quickly" (sandbox solutions that emulate mouse movement less frequently will never detonate the malware).
After checking that all 5 captured cursor positions meet the requirements, LummaC2 v4.0 uses trigonometry to detect “human” behavior. If it does not detect this human-like behavior, it will start the process all over again from the beginning. The following lines explain the logic the malware follows.
Let’s assume here we have our 5 captured cursor positions (with a gap of 50 milliseconds between them) named P0, P1, …, P4.
LummaC2 is going to treat these captured points as Euclidean vectors. As a result, the captured mouse positions form 4 different vectors: P01, P12, P23 and P34.
LummaC2 v4.0 then calculates the angle that is formed between consecutive vectors P01-P12, P12-P23 and P23-P34 sequentially. Here we can see an example of the different angles that are calculated:
To get the magnitude of a vector (shortest distance between two points), the Euclidean distance formula is used. To calculate the angle between two Euclidean vectors, it makes use of the dot product of vectors and transforms the result from radians to degrees. This can be seen in the following figures:
The result is finally converted to degrees and compared against a threshold of 45.0 degrees, which is hardcoded in the malware sample and initialized at the beginning of the anti-sandbox routine (as can be seen in the previous Figure 13).
If all the calculated angles are lower than 45º, LummaC2 v4.0 considers it has detected "human" mouse behavior and continues with its execution. However, if any of the calculated angles is greater than 45º, the malware starts the process all over again, ensuring there is mouse movement within a 300-millisecond period and capturing 5 new cursor positions to process.
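The following Python sketch (our own illustration, built from the description above) mirrors that decision: the 5 captured positions are turned into 4 vectors, the angle between each pair of consecutive vectors is computed from the dot product, and any angle above the 45-degree threshold fails the check. Used together with the earlier sampling sketch, looks_human(positions) reproduces the gate LummaC2 v4.0 applies before continuing execution.

import math

def angle_between_vectors_degrees(v1, v2):
    # angle = arccos((v1 . v2) / (|v1| * |v2|)), converted from radians to degrees
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    mag1 = math.hypot(v1[0], v1[1])
    mag2 = math.hypot(v2[0], v2[1])
    cos_angle = max(-1.0, min(1.0, dot / (mag1 * mag2)))   # clamp for float safety
    return math.degrees(math.acos(cos_angle))

def looks_human(positions, threshold=45.0):
    # Build the 4 vectors P01, P12, P23, P34 from the 5 captured positions
    vectors = [(positions[i + 1][0] - positions[i][0],
                positions[i + 1][1] - positions[i][1]) for i in range(4)]
    # Compare consecutive vectors; any angle above the threshold fails the check
    for i in range(3):
        if angle_between_vectors_degrees(vectors[i], vectors[i + 1]) > threshold:
            return False
    return True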
In the following figure we can see how the anti-sandbox check is performed once the angle between two vectors has been obtained in angle_between_vectors_degrees. The difference_threshold is initialized at the beginning of the routine (which can be seen in Figure 13).
How to bypass LummaC2 v4.0 Anti-Sandbox technique
As we have seen, this technique allows the malware to delay detonation until "human" mouse movement is detected. Moreover, this mouse movement must be continuous and smooth. Moving the cursor erratically, jumping quickly to random positions, drawing tight circles, or abruptly changing the direction of the cursor movement will most likely result in the malware never detonating.
In order to detonate LummaC2 v4.0, mouse movement must be continuous (5 different positions are captured in approximately 0.25 seconds) and smooth (without abrupt changes of direction), given that the angles between the vectors formed by these positions cannot exceed 45 degrees.
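For sandbox or automation tooling, a simple way to satisfy the check is to move the cursor along a nearly straight line in small steps. The sketch below (our own, Windows only) does this through the documented Win32 APIs GetCursorPos() and SetCursorPos(); the step sizes and delay are arbitrary values that keep consecutive movement vectors well under 45 degrees apart.

import ctypes
import time

user32 = ctypes.windll.user32

class POINT(ctypes.Structure):
    _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

def move_mouse_smoothly(steps=200, dx=3, dy=1, delay=0.01):
    pt = POINT()
    user32.GetCursorPos(ctypes.byref(pt))
    for _ in range(steps):
        pt.x += dx
        pt.y += dy
        user32.SetCursorPos(pt.x, pt.y)
        time.sleep(delay)

move_mouse_smoothly()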
Forcing threat actors to use a crypter for their builds
As we saw in earlier LummaC2 advertisements in underground forums, protecting the malware with a crypter was recommended to avoid leaking the malware anywhere in its pure form.
Newer versions of the malware added a new feature to avoid leaking unpacked samples. If the sample being executed is not packed, the infected user is presented with the following alert, which allows them to terminate the execution of the malware prematurely without any harm being caused to the infected machine.
This shows the lengths to which the malware developer is willing to go to avoid leaking unpacked samples.
LummaC2 v4.0 detects whether the running executable is crypted by searching for a specific value at a certain offset in the PE file. If this value is found, the sample is not crypted. As we have previously seen, the packer does not create a new process to run the malware but uses a thread instead, so analyzing or executing the unpacked sample will result in this message box being displayed.
It starts by executing GetModuleFileNameW with the hModule parameter set to 0 (to retrieve the path of the executable file of the current process) and obtaining a handle to the file for later reading via CreateFileW with the dwDesiredAccess parameter set to GENERIC_READ. Then, it gets the size of the file with GetFileSizeEx and allocates a buffer of the obtained size using malloc. Next, a call to ReadFile fills the buffer with the contents of the executable file of the current process, as shown in the following figure:
Afterwards, it checks using memcmp whether a hardcoded mark is present at a hardcoded offset plus 8 bytes. If the contents match, the user is presented with the alert preventing detonation via NtRaiseHardError.
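The check can be replicated on a file on disk with a short Python sketch (our own helper; the mark and offset are sample-specific hardcoded values and are therefore left as parameters rather than filled in):

def is_unpacked(path: str, mark: bytes, offset: int) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    # memcmp-style comparison of len(mark) bytes at offset + 8
    return data[offset + 8:offset + 8 + len(mark)] == mark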
This hardcoded mark (which changes between samples) has been previously hashed to check for file integrity. If this integrity check does not match, the malware ends abruptly. However, this should not be an issue if the unpacking of the malware was successful.
The following picture shows the hardcoded mark and the hardcoded offset, as well as the contents of the unpacked version of the malware, which would show the alert upon execution.
Conclusion
Information Stealers such as LummaC2 v4.0 pose significant risks and have the potential to inflict substantial harm on both individuals and organizations, including privacy breaches and the unauthorized exposure of confidential data. As our analysis has revealed, LummaC2 v4.0 appears to be a dynamic malware strain that remains under active development, constantly enhancing its codebase with additional features and improved obfuscation techniques, along with updates to its control panel. The ongoing usage of this malware in real-world scenarios indicates that it will likely continue to evolve, incorporating more advanced features and security measures in the future. This evolution is further facilitated by an open channel for customers to request bug fixes and propose enhancements.
Outpost24’s KrakenLabs will continue to analyze new malware samples as part of Outpost24’s Threat Intelligence solution, which can retrieve compromised credentials in real-time to prevent unauthorized access to your systems. Interested to see how Threat Intelligence can fit in with your organization? Request a demo today.
IOCs
Hashes
LummaC2 v4.0 (sample 1)
- b14ddf64ace0b5f0d7452be28d07355c1c6865710dbed84938e2af48ccaa46cf (packed)
- 4408ce79e355f153fa43c05c582d4e264aec435cf5575574cb85dfe888366f86 (unpacked)
LummaC2 v4.0 (sample 2)
- de6c4c3ddb3a3ddbcbea9124f93429bf987dcd8192e0f1b4a826505429b74560 (packed)
- 976c8df8c33ec7b8c6b5944a5caca5631f1ec9d1d528b8a748fee6aae68814e3 (unpacked)
C&Cs
curtainjors[.]fun
gogobad[.]fun
superyupp[.]fun
References
“The Case of LummaC2 v4.0”, September, 2023. [Online]. Available: https://www.esentire.com/blog/the-case-of-lummac2-v4-0 [Accessed November 05, 2023]
“Everything you need to know about the LummaC2 stealer: leveraging IDA Python and Unicorn to deobfuscate Windows API hashing”, April, 2023. [Online]. Available: https://outpost24.com/blog/everything-you-need-to-know-lummac2-stealer/ [Accessed November 05, 2023]
“Using IDAPython to Make Your Life Easier: Part 4”, January, 2016. [Online]. Available: https://unit42.paloaltonetworks.com/using-idapython-to-make-your-life-easier-part-4 [Accessed November 05, 2023]
“A cryptor, a stealer and a banking trojan”, September, 2023. [Online]. Available: https://securelist.com/crimeware-report-asmcrypt-loader-lumma-stealer-zanubis-banker/110512/ [Accessed November 05, 2023]
“Deobfuscation: recovering an OLLVM-protected program”, December, 2014. [Online]. Available: https://blog.quarkslab.com/deobfuscation-recovering-an-ollvm-protected-program.html [Accessed November 05, 2023]
“The Rise of the Lumma Info-Stealer”, September, 2023. [Online]. Available: https://darktrace.com/blog/the-rise-of-the-lumma-info-stealer [Accessed November 05, 2023]