Playing Commandos 2 on a modern Windows 10 system with a 4K display revealed a major, unexpected pain point: the menu only runs in 640x480, regardless of the resolution the game itself is set to. This compounds with the game’s lack of 4K support, so playing involves a mode switch on launch, a mode switch from the menu to the game, a mode switch back to the menu whenever you want to, ahem, save scum, and a mode switch to the desktop any time you need to do pretty much anything else on your computer, like answer an email or check Slack.
Naturally (for me), my first thought was to implement a render virtualization layer for this. Digging deep revealed the game to be using DirectDraw7, mostly issuing blits and doing all its transforms on the CPU for the few polygons it draws against the pre-rendered backgrounds, so I spun up a new ‘DirectDraw7On12’ project in the spirit and naming style of D3D11On12, and it was time for some API spelunking.
The interfaces used in DirectDraw7 reflect the PC landscape of their time, and it was evident that API design itself was still maturing: the stable, convenient D3D11 / D3D12 style of interface was still down the road, and DDraw7 was an earlier evolutionary step. Consider the DDPIXELFORMAT struct - it has a flags member that specifies which of the other members are valid, but if you look closely you’ll see that it is basically multiple structs aliased over each other via unions. That pattern is common in D3D12 - D3D12_SHADER_RESOURCE_VIEW_DESC has a ViewDimension member and a set of aliased union types, one per dimension - but DDPIXELFORMAT seems to alias each single variable. So the first variable after the FourCC code can be the RGB bit count, or the Z bit depth, or one of a few other things based on flag values. Then the variable after that is defined as an unnamed union of bit depths and bit masks, etc.
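To make the aliasing concrete, here is a trimmed-down sketch of that layout (the real DDPIXELFORMAT in ddraw.h has more aliased members than shown), together with the kind of flag dispatch a reader - human or debugger - has to perform. DescribePixelFormat is an illustrative helper, not part of the API:

```cpp
#include <cstdint>
#include <string>

// Trimmed sketch of ddraw.h's DDPIXELFORMAT: the members after dwFourCC
// are anonymous unions whose meaning depends on dwFlags, so the same
// 32 bits read as an RGB bit count, a Z-buffer depth, and so on.
struct DDPixelFormatSketch {
    uint32_t dwSize;
    uint32_t dwFlags;
    uint32_t dwFourCC;
    union {                         // meaning selected by dwFlags
        uint32_t dwRGBBitCount;     // valid when DDPF_RGB is set
        uint32_t dwZBufferBitDepth; // valid when DDPF_ZBUFFER is set
        uint32_t dwAlphaBitDepth;   // valid when DDPF_ALPHA is set
    };
    union {
        uint32_t dwRBitMask;        // DDPF_RGB
        uint32_t dwZBitMask;        // DDPF_ZBUFFER
    };
};

constexpr uint32_t DDPF_ALPHA   = 0x00000002;
constexpr uint32_t DDPF_FOURCC  = 0x00000004;
constexpr uint32_t DDPF_RGB     = 0x00000040;
constexpr uint32_t DDPF_ZBUFFER = 0x00000400;

// Branch on the flags to decide which union member is live - exactly
// the dispatch a debugger (or a Natvis file) has to replicate.
std::string DescribePixelFormat(const DDPixelFormatSketch& pf) {
    if (pf.dwFlags & DDPF_FOURCC)  return "FourCC surface";
    if (pf.dwFlags & DDPF_RGB)     return "RGB, " + std::to_string(pf.dwRGBBitCount) + " bpp";
    if (pf.dwFlags & DDPF_ZBUFFER) return "Z-buffer, " + std::to_string(pf.dwZBufferBitDepth) + " bits";
    return "unknown";
}
```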
With D3D12_SHADER_RESOURCE_VIEW_DESC you can just expand the right member in the union based on the ViewDimension value, but with DDPIXELFORMAT that cannot be done in the debugger, and even code parsing it is complex. I needed to be able to determine what a given pixel format is at a glance in the blit-heavy DirectDraw API, so I turned to Natvis. Boilerplate aside (which I didn’t mind), I got to writing the visualizer for the flags field, and it looked like this for the RGB flag:
<Synthetic Name="DDPF_RGB" Condition="dwFlags &amp; 0x00000040l">
<DisplayString>true</DisplayString>
</Synthetic>
I wrote that, looked at it, and the moment I saw that I had to escape the ampersand I realized I’m actually writing a serialized format. An uneducated guess would be some .NET serializer - basically no human would pick this format for a medium where ampersands are quite common, but someone trying to do loading with built-in language support and the least amount of internalized effort would just define whatever their serialized structures are as the file format.
This was gonna get real tedious real fast, so I decided to instead write a small Domain-Specific Language, which I called Typevis, that outputs Natvis XML files. Code and a readme will be included, but here’s the snippet of Typevis input that produces the above Natvis code:
- 'DDPF_RGB' | dwFlags & 0x00000040l : 'true'
Basically: the name, followed by a pipe (because it looked like a “such that” symbol), then the condition, then what to display in the debugger output. The entire struct looks like this:
_DDPIXELFORMAT
{
'DirectDraw Pixel Format'
{
- 'Flags' : '{dwFlags,x}'
{
- 'DDPF_ALPHAPIXELS' | dwFlags & 0x00000001l : 'true'
- 'DDPF_ALPHA' | dwFlags & 0x00000002l : 'true'
- 'DDPF_FOURCC' | dwFlags & 0x00000004l : 'true'
- 'DDPF_PALETTEINDEXED4' | dwFlags & 0x00000008l : 'true'
- 'DDPF_PALETTEINDEXEDTO8' | dwFlags & 0x00000010l : 'true'
- 'DDPF_PALETTEINDEXED8' | dwFlags & 0x00000020l : 'true'
- 'DDPF_RGB' | dwFlags & 0x00000040l : 'true'
[SNIP - other flags]
}
- 'RGB bit depth' | dwFlags & 0x00000040l : '{dwRGBBitCount}'
}
}
This way you can expand the flags, and see things like RGB bit depth if and only if the RGB bit is set. This made things a lot faster, and once writing Natvis via Typevis became less tedious I found myself doing more helpful things, like adding format detection that only shows under slightly more complex conditions:
- 'Format' | dwBBitMask == 0x1f && dwGBitMask == 0x7e0 && dwRBitMask == 0xf800 : 'BGR565'
This expands to the following Natvis:
<Synthetic Name="Format" Condition="dwBBitMask == 0x1f &amp;&amp; dwGBitMask == 0x7e0 &amp;&amp; dwRBitMask == 0xf800">
<DisplayString>BGR565</DisplayString>
</Synthetic>
Not impossible to write by hand, but having to escape the ampersands would have kept it from crossing my “why would I bother” threshold - the bang-per-buck calculation just changed.
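To illustrate the translation step, here is a hypothetical sketch - not the actual Typevis source - of turning one name / condition / display triple into a Natvis Synthetic element, with the XML escaping that makes hand-written conditions so tedious handled once in code:

```cpp
#include <string>

// Escape the XML metacharacters that commonly appear in debugger
// condition expressions ('&' for bitwise tests, '<'/'>' for comparisons).
std::string EscapeXml(const std::string& s) {
    std::string out;
    for (char c : s) {
        switch (c) {
            case '&': out += "&amp;"; break;
            case '<': out += "&lt;";  break;
            case '>': out += "&gt;";  break;
            default:  out += c;       break;
        }
    }
    return out;
}

// Emit one Natvis <Synthetic> element from a Typevis-style
// "- 'Name' | condition : 'display'" entry (names are illustrative).
std::string EmitSynthetic(const std::string& name,
                          const std::string& condition,
                          const std::string& display) {
    return "<Synthetic Name=\"" + EscapeXml(name) + "\" Condition=\"" +
           EscapeXml(condition) + "\">\n"
           "  <DisplayString>" + EscapeXml(display) + "</DisplayString>\n"
           "</Synthetic>";
}
```

Calling `EmitSynthetic("DDPF_RGB", "dwFlags & 0x00000040l", "true")` yields the escaped Synthetic element shown earlier, with the ampersand turned into `&amp;` automatically.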
Then more second-order effects showed up - I was writing a visualizer for D3D12_RESOURCE_DESC when I realized I could have the debug view validate the struct. Here’s the pertinent Typevis:
D3D12_RESOURCE_DESC
{
'D3D12 Tex2D Resource Description [INVALID]' | Dimension == 3 && DepthOrArraySize != 1
'D3D12 Tex2D Resource Description' | Dimension == 3
'D3D12 Resource Description'
{
- 'Dimensions': '{{ {Width}x{Height} }}'
- 'Format' | Format == 85: 'BGR565'
- 'Format' | Format != 85: Format
- 'Mip levels': '{MipLevels}'
- 'Alignment': '{Alignment} [{Alignment,x}]'
}
}
The name shown in the debugger will match the first statement whose condition is true, so if Dimension equals 3 (D3D12_RESOURCE_DIMENSION_TEXTURE2D) and DepthOrArraySize isn’t 1, the debugger will show the struct’s friendly name as D3D12 Tex2D Resource Description [INVALID]; otherwise it will show D3D12 Tex2D Resource Description. This is just an example - many cases aren’t handled - but it demonstrates the concept of validation logic in Typevis / Natvis. This generates the following Natvis:
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
<Type Name="D3D12_RESOURCE_DESC">
<DisplayString Condition="Dimension == 3 &amp;&amp; DepthOrArraySize != 1">D3D12 Tex2D Resource Description [INVALID]</DisplayString>
<DisplayString Condition="Dimension == 3">D3D12 Tex2D Resource Description</DisplayString>
<DisplayString>D3D12 Resource Description</DisplayString>
<Expand>
<Synthetic Name="Dimensions">
<DisplayString>{{ {Width}x{Height} }}</DisplayString>
</Synthetic>
<Synthetic Name="Format" Condition="Format == 85">
<DisplayString>BGR565</DisplayString>
</Synthetic>
<Item Name="Format" Condition="Format != 85">Format</Item>
<Synthetic Name="Mip levels">
<DisplayString>{MipLevels}</DisplayString>
</Synthetic>
<Synthetic Name="Alignment">
<DisplayString>{Alignment} [{Alignment,x}]</DisplayString>
</Synthetic>
</Expand>
</Type>
</AutoVisualizer>
The Typevis parser was a quick, hacky tool that proved way more useful than I imagined. It’s incomplete and imperfect - for a while I’d always run it in a debugger, since crashes and infinite loops were common, and I only fixed what blocked me from using it - but I think others might find it useful. Word of warning though: you can’t always redirect Typevis output to a file, as PowerShell, for example, adds a Byte Order Mark (BOM) to the generated XML, and the Visual Studio Natvis parser, at the time of writing Typevis, wouldn’t consume that gracefully. The tool has a command line option to specify the output path instead.
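For illustration, here is a sketch of why writing the file from the tool itself sidesteps the problem: a plain std::ofstream emits exactly the bytes it is given, so no BOM is ever prepended and the XML starts with ‘<’ as the Natvis loader expects (the function and file names here are illustrative, not Typevis internals):

```cpp
#include <fstream>
#include <string>

// Write the generated Natvis XML directly, bypassing shell redirection.
// std::ofstream performs no re-encoding, so the file begins with the
// literal bytes of the XML declaration rather than a Byte Order Mark.
void WriteNatvis(const std::string& path, const std::string& xml) {
    std::ofstream out(path, std::ios::binary); // binary: no newline translation either
    out << xml;                                // bytes out verbatim, no BOM
}
```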
Typevis is open source, and can be found on Code by Sherief