I've seen the examples that come with the trial, but I don't quite understand them...
I'm looking to replace our existing rendering solution which is based on irrlicht lime. In irrlicht, I can subclass SceneNode and provide a Render event handler which gets called at render time.
From this handler, I can set the world and view matrices and make primitive draw calls such as render tristrip and pass in an array of vertices and indices. I use this on occasion to do things like drawing some geometry "over top" of other geometry by moving it closer to the camera a smidge at render time.
To provide your own rendering logic you can use the CustomRenderableNode object - it is a special SceneNode that is created with a callback action that is called when the SceneNode needs to be rendered. The action is called with the RenderingContext and other parameters; there you can set your own DirectX states and issue draw calls.
When setting states, constant buffers and shaders, please use the renderingContext.ContextStatesManager object - this way your custom code will work correctly with the other DXEngine SceneNodes.
To see a sample CustomRenderableNode check the Customizations\CustomRenderingStep4.xaml.cs source in the DXEngine samples project (there are also some additional comments).
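As a rough sketch of the idea (the constructor parameters and the callback signature below are assumptions - CustomRenderingStep4.xaml.cs shows the actual API):

```csharp
// Sketch only: the constructor overload and the callback signature are
// assumptions - see CustomRenderingStep4.xaml.cs for the real usage.
var customNode = new Ab3d.DirectX.CustomRenderableNode(RenderMyGeometry, myGeometryBounds);
dxScene.RootNode.AddChild(customNode);

// The callback is invoked when the node needs to be rendered:
void RenderMyGeometry(Ab3d.DirectX.RenderingContext renderingContext,
                      Ab3d.DirectX.CustomRenderableNode node)
{
    // Change DirectX states, constant buffers and shaders through
    // renderingContext.ContextStatesManager so the engine stays in sync
    // with your changes, then issue your own draw calls.
}
```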
You can also provide your rendering logic by specifying your own CustomActionRenderingStep - rendering steps are defined in the DXScene.RenderingSteps collection and define the steps that are executed to render one frame.
By default the rendering steps are:
InitializeRenderingStep - the first rendering step; it sets up the RenderingContext with the current render targets, resets statistics, etc.
PrepareRenderTargetsRenderingStep - sets the render targets, clears them and sets the viewport.
RenderObjectsRenderingStep - renders the objects with their default effect and material.
ResolveMultisampledBackBufferRenderingStep - resolves the multisampled back buffer (MSAABackBuffer) into the back buffer.
PreparePostProcessingRenderingStep - prepares the buffers for post-processing. When no post-processing effects are used, this step and the following steps are not present in the RenderingSteps collection.
RenderPostProcessingRenderingStep - renders the post-processing effects.
CompleteRenderingStep - the last rendering step; it presents the SwapChain (if used) or prepares the output buffer that can be sent to WPF or to CPU memory.
You can add your own rendering step between already defined rendering steps.
This can be done with the following steps:
Define your own rendering step.
This can be done by deriving a class from Ab3d.DirectX.RenderingStepBase and implementing its OnRun method. A simpler option is to create a new instance of the Ab3d.DirectX.CustomActionRenderingStep class and set its CustomAction delegate to a method that will execute your code (this approach is also used in this sample).
Insert your rendering step into the existing rendering steps.
This is done by calling the AddAfter or AddBefore method. Both methods take an existing rendering step as the first parameter. To get an existing rendering step, you can use the properties defined in DXScene - their names start with Default followed by the name of the rendering step class - for example, DXScene.DefaultRenderObjectsRenderingStep.
Execute your code based on the RenderingContext. OnRun or CustomAction is called with a RenderingContext that provides various properties about the current rendering process - for example:
the current DirectX device, device context, render target, back buffer, viewport, frame number, etc.
Many other scene-related properties can be obtained from the DXScene - for example, the current camera and lights.
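Put together, the three steps above can be sketched like this (a minimal sketch: DefaultRenderObjectsRenderingStep is one of the Default... properties mentioned above, while the step name and the ProcessMyRendering method are made up for this example):

```csharp
// 1) Define a custom rendering step with a delegate (simpler than
//    deriving from RenderingStepBase and implementing OnRun):
var myRenderingStep = new Ab3d.DirectX.CustomActionRenderingStep("MyCustomRenderingStep")
{
    CustomAction = ProcessMyRendering
};

// 2) Insert it into the existing steps - here after the standard object rendering:
dxScene.RenderingSteps.AddAfter(dxScene.DefaultRenderObjectsRenderingStep, myRenderingStep);

// 3) Execute your code based on the RenderingContext:
void ProcessMyRendering(Ab3d.DirectX.RenderingContext renderingContext)
{
    var deviceContext = renderingContext.DeviceContext; // DirectX device context
    var frameNumber   = renderingContext.FrameNumber;
    var camera        = renderingContext.DXScene.Camera; // scene data from DXScene

    // set states through renderingContext.ContextStatesManager
    // and issue your draw calls here
}
```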
More information about this process and sample code can be found in the samples from Customizations\CustomRenderingStep1 to Customizations\CustomRenderingStep3. It is also recommended to check the other samples in the Customizations folder.
Ok, thanks for that. I did do some looking at the examples again after I posted and I think I understand it all.
The problem is that it seems very complex. Like all I need to do is set a custom world matrix for some geometry and draw it. But it seems to do this I need to have my own vertex shader and set the world matrix in the constant buffer, is that correct?
In irrlicht it was really nice because I could literally on the draw context object set the "world transform" and then just draw existing vertex/index buffers. No shaders were required. It was very simple.
The solution in ab3d seems quite complex. Is my understanding correct, that I need to set the custom world matrix in a constant buffer and then create a custom vertex shader that consumes this matrix to transform the vertices?
Uh, it would surely be overkill to provide your own rendering logic just for a custom transformation.
You can set a transformation on any SceneNode by setting its Transform property - this way you can provide your own world transformation. This will also work well with other parts of the engine (for example with the automatic calculation of the near and far camera planes). The DXEngine will then choose the correct effect and shader based on the specified material and do the drawing - it will also set the transformation matrix into the correct constant buffer.
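For example (a sketch; the Transformation wrapper and the dirty-flag notification are from the engine's API, but treat the exact overloads as assumptions):

```csharp
// Sketch: set a custom world matrix on a SceneNode instead of writing your
// own rendering logic. The engine then picks the correct shader and writes
// this matrix into the correct constant buffer for you.
var worldMatrix = SharpDX.Matrix.Translation(0, 0, 0.01f); // e.g. nudge the geometry toward the camera
meshObjectNode.Transform = new Ab3d.DirectX.Transformation(worldMatrix);

// When the matrix changes later, notify the engine that the node changed:
meshObjectNode.NotifySceneNodeChange(
    Ab3d.DirectX.SceneNode.SceneNodeDirtyFlags.TransformChanged);
```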
Is this not enough for your use case, or is there another reason that you need custom rendering logic?
But I need to set the transform on a per-frame basis and ideally the geometry will be instanced.
I assumed changing the scene node transform on a per-frame basis would be expensive... There will potentially be 10k of these things in a scene, so I assumed the overhead of one scene node per instance would be overkill.
I can give it a try and see how it goes - where would be the best place to set the scene node transform just prior to rendering?
If you are rendering objects with the same geometry (mesh), then it is recommended to use object instancing - this gives by far the best performance. There you can simply set the transformation matrix in the InstanceData array.
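A minimal sketch of this (InstancedMeshGeometryVisual3D and the InstanceData struct with World and DiffuseColor fields are from the engine's instancing API; treat the exact namespace and member names as assumptions):

```csharp
// Sketch: render many copies of one mesh with a single draw call.
// Each entry in the InstancesData array carries a per-instance world matrix.
var instancedVisual = new Ab3d.DirectX.Models.InstancedMeshGeometryVisual3D(meshGeometry3D);

var instancesData = new Ab3d.DirectX.InstanceData[instancesCount];
for (int i = 0; i < instancesCount; i++)
{
    instancesData[i].World = SharpDX.Matrix.Translation(i * 10, 0, 0); // per-instance transform
    instancesData[i].DiffuseColor = new SharpDX.Color4(1, 0, 0, 1);    // per-instance color
}

instancedVisual.InstancesData = instancesData;
```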
For the next version of DXEngine I have prepared a sample that shows animated instanced arrows - on my computer it can show 1 million (!!!) animated 3D arrows at almost 60 FPS (most of the time is spent calculating the new world matrix for each arrow in each frame). The sample shows that with instancing it is possible to achieve incredible performance.
I am attaching the two files for the sample to this post. To test the sample, add the files to the DXEngine samples project into the DXEngine folder. Then open the Samples.xml file in the root folder and add the following line (for example into line 14):
If you do not use instancing, you can still update the Transform of each SceneNode before you call Render on the DXScene (you said that you are not using DXViewportView and are calling Render manually). This should not be a serious performance problem. A bigger problem is having 10k objects where each is defined by its own SceneNode - this requires 10k DirectX draw calls, and that limits your performance because of the required work on the CPU (again, use DXEngineSnoop to see how much time is spent in DrawRenderTime).
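If you stay with one SceneNode per object, a per-frame update could look like this (a sketch; myAnimatedNodes and GetWorldMatrixFor are hypothetical names for your own collection and animation logic):

```csharp
// Sketch: update the transforms just before manually rendering the frame.
foreach (var node in myAnimatedNodes)
{
    // GetWorldMatrixFor is a hypothetical helper returning a SharpDX.Matrix
    node.Transform = new Ab3d.DirectX.Transformation(GetWorldMatrixFor(node));
    node.NotifySceneNodeChange(
        Ab3d.DirectX.SceneNode.SceneNodeDirtyFlags.TransformChanged);
}

dxScene.Render(); // manual rendering without DXViewportView
```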
I had seen the instancing functionality there, but hadn't really looked into it because it means I'll need to change the way I'm binding my data model to the visuals... Which will be a pain, but it sounds like performance-wise it'll be well worth the effort.
Regarding instancing - if I have a single DXMeshGeometry3D object that gets used by multiple MeshObjectNode objects, is that effectively instancing the underlying geometry? Or does the MeshObjectNode make a copy of the mesh data?
Ok great. How come this class doesn't show up in the class hierarchy in the docs? Makes it very difficult to find.
Also, why does it not have the same InstanceData property with both get and set?
And lastly, it seems that if there is no instance data, rendering crashes... In certain cases I won't have any of these instances - what is the best way to stop the node from crashing the render loop?