In our recent paper, we examined memory acquisition in detail and tested a number of tools. Memory acquisition tools must accomplish two tasks to be useful:
- They need to be able to map a region of physical memory into the virtual address space, so it can be read by the tool.
- They need to know where in the physical address space it is safe to read. Reading a DMA-mapped region will typically crash the system (BSOD).
Since PCI devices are able to map DMA buffers into the physical address space, it is not safe to read these buffers. When a read operation occurs on the memory bus for these addresses, the device might become activated and cause a system crash, or worse. The memory acquisition tool must therefore avoid these DMA-mapped regions in order to safely acquire memory.
Let's see what happens when we load the memory acquisition driver. Since our goal is to play around with memory modification, we will enable write support for the winpmem acquisition tool (this example uses a Windows 7 AMD64 VM):
In [2]:
!c:/Users/mic/winpmem_write_1.5.5.exe -l -w
We see that winpmem extracts its driver to a temporary location, and loads it into the kernel. It then reports the value of the Control Register CR3 (This is the kernel's Directory Table Base - or DTB).
Next we see that the driver reports the ranges of physical memory available on this system. There are two ranges on this system with a gap in between. To understand why, let's consider the boot process:
- When the system boots, the BIOS configures the initial physical memory map. The RAM in the system is literally installed at various ranges in the physical address space by the BIOS.
- The operating system is booted in real mode, at which point a BIOS service interrupt (INT 15h, function E820h) is issued to query this physical memory configuration. This interrupt can only be issued in real mode.
- During the OS boot sequence, the processor is switched to protected mode and the operating system continues booting.
- The OS configures PCI devices by talking to the PCI controller and mapping each PCI device's DMA buffer (plug and play) into one of the gaps in the physical address space. Note that these gaps may not actually be backed by any RAM chips at all (which means that a write to that location will simply not stick - reading it back will produce 0).
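The resulting layout can be pictured with a small sketch. This is purely illustrative: the run addresses and the `is_backed_by_ram` helper below are made-up examples, not part of any real firmware or Rekall API.

```python
# Hypothetical sketch: a physical address space as a list of RAM runs,
# with gaps left by the firmware for device (DMA) mappings.
# All names and addresses here are illustrative only.

RAM_RUNS = [
    (0x0000000000, 0x000009F000),   # low memory below the BIOS area
    (0x0000100000, 0x00BFF00000),   # main RAM, ending below a PCI hole
]

def is_backed_by_ram(phys_addr):
    """Return True if phys_addr falls inside a RAM run (safe to read)."""
    return any(start <= phys_addr < end for start, end in RAM_RUNS)

# Addresses in the gap between runs may be PCI/DMA mappings - reading
# them from an acquisition tool is what risks crashing the system.
print(is_backed_by_ram(0x1000))       # inside the first run
print(is_backed_by_ram(0x000A0000))   # in the gap between runs
```

An acquisition tool effectively needs exactly this predicate: read only the pages inside the runs, and never touch the gaps.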
The important thing to take from this is that the physical memory configuration is done by the machine BIOS on its own (independent of the running operating system). The OS kernel needs to live with whatever configuration the hardware boots with. The hardware will typically install some gaps in the physical address range so that PCI devices can be mapped inside them (some PCI devices can only address 4GB, so there must be sufficient space in the lower 4GB of physical address space for these).
Since the operating system can only query the physical memory map when running in real mode, but needs it to configure PCI devices while running in protected mode, there must be a data structure somewhere which keeps this information around. When WinPmem queries for this information, it cannot be retrieved directly from the BIOS - the machine is already running in protected mode.
The usual way to get the physical memory ranges is to call MmGetPhysicalMemoryRanges(). This is the function API:
PPHYSICAL_MEMORY_DESCRIPTOR NTAPI MmGetPhysicalMemoryRanges(VOID);
We can get Rekall to disassemble this function for us. First we initialize the notebook, opening the winpmem driver to analyze the live system. Since Rekall uses exact profiles generated from accurate debugging information for the running system, it can resolve all debugging symbols directly. We therefore can simply disassemble the function by name:
In [2]:
from rekall import interactive
interactive.ImportEnvironment(filename=r"\\.\pmem")
In [3]:
dis "nt!MmGetPhysicalMemoryRanges"
Note that Rekall is able to resolve the addresses back to the symbol names by using debugging information. This makes reading the disassembly much easier. We can see that this function essentially copies the data referred to from the symbol nt!MmPhysicalMemoryBlock into user space.
Let's dump this memory:
In [8]:
dump "nt!MmPhysicalMemoryBlock", rows=2
This appears to be a pointer; let's dump the memory it points to:
In [10]:
dump 0xfa8001793fd0, rows=4
The data at this location contains a struct of type _PHYSICAL_MEMORY_DESCRIPTOR, which is also the return value from the MmGetPhysicalMemoryRanges() call. We can use Rekall to simply overlay this struct at that location and print out all its members.
In [12]:
memory_range = session.profile._PHYSICAL_MEMORY_DESCRIPTOR(0xfa8001793fd0)
print memory_range
In [13]:
for r in memory_range.Run:
    print r
So what have we found?
- There is a symbol called nt!MmPhysicalMemoryBlock which is a pointer to a _PHYSICAL_MEMORY_DESCRIPTOR struct.
- This struct contains the total number of runs, and a list of each run in pages (0x1000 bytes long).
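The conversion from page-denominated runs to byte offsets is the one calculation the plugin below relies on, so it is worth spelling out. This is a minimal standalone sketch (`run_to_byte_range` is a hypothetical helper, not a Rekall API):

```python
PAGE_SIZE = 0x1000  # runs in _PHYSICAL_MEMORY_DESCRIPTOR are in 4 KiB pages

def run_to_byte_range(base_page, page_count):
    """Convert a (BasePage, PageCount) run into (start, end) byte offsets."""
    start = base_page * PAGE_SIZE
    end = (base_page + page_count) * PAGE_SIZE
    return start, end

# e.g. a run starting at page 1 and spanning 0x9E pages:
print([hex(x) for x in run_to_byte_range(0x1, 0x9E)])
```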
Let's write a Rekall plugin for this:
In [15]:
from rekall.plugins.windows import common
class WinPhysicalMap(common.WindowsCommandPlugin):
    """Prints the boot physical memory map."""

    __name = "phys_map"

    def render(self, renderer):
        renderer.table_header([
            ("Physical Start", "phys", "[addrpad]"),
            ("Physical End", "phys", "[addrpad]"),
            ("Number of Pages", "pages", "10"),
        ])

        descriptor = self.profile.get_constant_object(
            "MmPhysicalMemoryBlock",
            target="Pointer",
            target_args=dict(
                target="_PHYSICAL_MEMORY_DESCRIPTOR",
            ))

        for memory_range in descriptor.Run:
            renderer.table_row(
                memory_range.BasePage * 0x1000,
                (memory_range.BasePage + memory_range.PageCount) * 0x1000,
                memory_range.PageCount)
This plugin is named phys_map and essentially creates a table with three columns. The memory descriptor is created directly from the profile; we then iterate over all the runs and output the start and end of each range into the table.
In [16]:
phys_map
So far, this is a pretty simple plugin. However, let's put on our black hat for a moment.
In our DFRWS 2013 paper we pointed out that since most memory acquisition tools end up calling MmGetPhysicalMemoryRanges() (all the ones we tested, at least), disabling this function would sabotage all of them. This turned out to be the case; however, patching the running code in memory triggers Microsoft's Patch Guard. In our tests we disabled Patch Guard to prove the point, but this is less practical in a real rootkit.
In reality, a rootkit would want to modify the underlying data structure behind the API call itself. This is much easier to do and won't modify any kernel code, thereby bypassing Patch Guard protections.
To test this, we can do this directly from Rekall's interactive console.
In [18]:
descriptor = session.profile.get_constant_object(
    "MmPhysicalMemoryBlock",
    target="Pointer",
    target_args=dict(
        target="_PHYSICAL_MEMORY_DESCRIPTOR",
    )).dereference()

print descriptor
Since we loaded the memory driver with write support, we can directly modify each field in the struct. For this proof of concept we simply set NumberOfRuns to 0, but a rootkit could get creative by modifying the runs to contain holes in strategic regions. By crafting a physical memory descriptor with a hole in it, we can cause memory acquisition tools to simply skip over a region of physical memory. Responders may then walk away thinking they have their evidence, while critical information is missing.
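The anti-forensic effect of such a hole can be sketched in a few lines. This is a toy model, not real acquisition code: `acquire` and `fake_read` are hypothetical stand-ins for a tool that trusts the descriptor's run list.

```python
# Hypothetical sketch: an acquisition tool that trusts the descriptor's
# run list will silently skip any page outside the listed runs.

def acquire(runs, read_page):
    """Dump only the pages listed in `runs` - holes are never read."""
    image = {}
    for base_page, page_count in runs:
        for page in range(base_page, base_page + page_count):
            image[page] = read_page(page)
    return image

fake_read = lambda page: b"\x00" * 0x1000   # stand-in for a real page read

# Honest descriptor: one run covering pages 0..9.
# Tampered descriptor: same span, but with a hole at pages 4..5.
honest = acquire([(0, 10)], fake_read)
tampered = acquire([(0, 4), (6, 4)], fake_read)

print(sorted(set(honest) - set(tampered)))   # the pages that went missing
```

Nothing in the tampered image hints that anything was skipped, which is exactly what makes this attack dangerous for responders.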
In [19]:
descriptor.NumberOfRuns = 0
Now we can repeat our phys_map plugin, but this time, no runs will be found:
In [20]:
phys_map
To unload the driver, we need to close any handles to it. We then try to acquire a memory image in the regular way.
In [32]:
session.physical_address_space.close()
In [2]:
!c:/Users/mic/winpmem_write_1.5.5.exe test.raw
This time, however, winpmem reports no memory ranges available. The resulting image is also 0 bytes in size:
In [3]:
!dir test.raw
At this point, running the DumpIt program from MoonSols will cause the system to immediately reboot. (It seems that DumpIt is unable to handle zero memory ranges gracefully and crashes the kernel.)
How stable is this?
We have just disabled a kernel function, but this might de-stabilize the system. What other functions in the kernel are calling MmGetPhysicalMemoryRanges?
Let's find out by disassembling the entire kernel. First we need to find the range of memory addresses the kernel code occupies. We use the peinfo plugin to show us the sections which are mapped into memory.
In [2]:
peinfo "nt"
Now, instead of disassembling to the interactive notebook, we store the output in a file. This does take a while, but produces a large text file containing the complete disassembly of the Windows kernel (with debugging symbols cross-referenced).
In [3]:
dis offset=0xF8000261F000+0x1000, end=0xF8000261F000+0x525000, output="ntkrnl_amd64.dis"
Now we can use our favourite editor (Emacs) to check all references to MmGetPhysicalMemoryRanges. We can see references from:
- nt!PfpMemoryRangesQuery - Part of ExpQuerySystemInformation.
- nt!IoFillDumpHeader - Called from crashdump facility.
- nt!IopGetPhysicalMemoryBlock - Called from crashdump facility.
We can also check references to MmPhysicalMemoryBlock. Many of these functions appear related to the Hot-Add memory functionality:
- nt!IoSetDumpRange
- nt!MiFindContiguousPages
- nt!MmIdentifyPhysicalMemory
- nt!MmReadProcessPageTables
- nt!MiAllocateMostlyContiguous
- nt!IoFillDumpHeader
- nt!MiReleaseAllMemory
- nt!MmDuplicateMemory
- nt!MiRemovePhysicalMemory
- nt!MmAddPhysicalMemory
- nt!MmGetNumberOfPhysicalPages - This seems to be called from Hibernation code.
- nt!MiScanPagefileSpace
- nt!MmPerfSnapShotValidPhysicalMemory
- nt!MmGetPhysicalMemoryRanges
Some testing remains to see how stable this modification is in practice. It appears likely that hot-add memory will no longer work, and hibernation may fail (hibernation is an alternative way to capture memory images, as Rekall can also operate on hibernation files). Although the above suggests that crash dumps are affected, I tried to produce a crash dump after this modification and it still worked as expected (which is actually kind of interesting in itself).
PS
This note was written inside Rekall itself by using the IPython notebook interface.