Mitsuba 2 provides a mechanism to implement custom plugins directly in
Python. To do so, simply extend a base class (e.g. BSDF, Emitter) and
override its class methods (e.g. sample(), eval(), …). This leverages the
trampoline feature of pybind11. The new plugin can then be registered with
the XML parser by calling one of several register_<type>(name, constructor)
functions, making it accessible in the XML scene language like any other
C++ plugin.
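Conceptually, registration just associates a plugin name with a constructor that receives the properties parsed from the XML scene file. The following plain-Python sketch illustrates the idea only; it is not Mitsuba's actual internal implementation, and the names `_bsdf_registry` and `instantiate_bsdf` are invented for illustration:

```python
# Toy model of the plugin-registration mechanism (illustrative only).
_bsdf_registry = {}

def register_bsdf(name, constructor):
    """Associate a plugin name with a factory taking the parsed XML properties."""
    _bsdf_registry[name] = constructor

def instantiate_bsdf(name, props):
    """What the scene loader conceptually does on <bsdf type="name"/>."""
    return _bsdf_registry[name](props)
```

The real `register_bsdf`, `register_emitter`, and `register_integrator` functions work analogously: the constructor callback is invoked once per matching XML element, with a Properties object holding that element's attributes.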
Warning

Only gpu_* variants will produce reasonable performance when using such
Python implementations of core system components (since they can be
JIT-compiled along with other system components). We plan to extend this
feature to further variant types in the future. Furthermore, only BSDF,
Emitter, and Integrator plugins can currently be implemented in Python.
The remainder of this section discusses several examples.
In this example, we will implement a simple diffuse BSDF in Python by extending
the BSDF base class. The code is very similar to the diffuse BSDF implemented
in C++ (in src/bsdfs/diffuse.cpp).
The BSDF class needs to implement the following three methods: sample, eval,
and pdf:
class MyDiffuseBSDF(BSDF):
    def __init__(self, props):
        BSDF.__init__(self, props)
        self.m_reflectance \
            = load_string('''<spectrum version='2.0.0' type='srgb' name="reflectance">
                                 <rgb name="color" value="0.45, 0.90, 0.90"/>
                             </spectrum>''')
        self.m_flags = BSDFFlags.DiffuseReflection | BSDFFlags.FrontSide
        self.m_components = [self.m_flags]

    def sample(self, ctx, si, sample1, sample2, active):
        cos_theta_i = Frame3f.cos_theta(si.wi)
        active &= cos_theta_i > 0

        bs = BSDFSample3f()
        bs.wo  = warp.square_to_cosine_hemisphere(sample2)
        bs.pdf = warp.square_to_cosine_hemisphere_pdf(bs.wo)
        bs.eta = 1.0
        bs.sampled_type = +BSDFFlags.DiffuseReflection
        bs.sampled_component = 0

        value = self.m_reflectance.eval(si, active)

        return (bs, ek.select(active & (bs.pdf > 0.0), value, Vector3f(0)))

    def eval(self, ctx, si, wo, active):
        if not ctx.is_enabled(BSDFFlags.DiffuseReflection):
            return Vector3f(0)

        cos_theta_i = Frame3f.cos_theta(si.wi)
        cos_theta_o = Frame3f.cos_theta(wo)

        value = self.m_reflectance.eval(si, active) * math.InvPi * cos_theta_o

        return ek.select((cos_theta_i > 0.0) & (cos_theta_o > 0.0), value, Vector3f(0))

    def pdf(self, ctx, si, wo, active):
        if not ctx.is_enabled(BSDFFlags.DiffuseReflection):
            return 0.0

        cos_theta_i = Frame3f.cos_theta(si.wi)
        cos_theta_o = Frame3f.cos_theta(wo)

        pdf = warp.square_to_cosine_hemisphere_pdf(wo)

        return ek.select((cos_theta_i > 0.0) & (cos_theta_o > 0.0), pdf, 0.0)

    def to_string(self):
        return "MyDiffuseBSDF[reflectance = {}]".format(self.m_reflectance.to_string())
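The two warp functions used in sample() come from Mitsuba's warp module. For intuition, here is a plain-Python sketch of what they compute; this is not Mitsuba's vectorized implementation (which uses a low-distortion concentric disk mapping rather than the simple polar mapping below), but the resulting density is the same:

```python
import math

def square_to_cosine_hemisphere(sample):
    """Map a point in [0, 1]^2 to a cosine-weighted direction on the
    hemisphere around +z (simple polar mapping, for illustration)."""
    r = math.sqrt(sample[0])         # radius on the unit disk
    phi = 2.0 * math.pi * sample[1]  # azimuth
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))  # lift onto the hemisphere
    return (x, y, z)

def square_to_cosine_hemisphere_pdf(wo):
    """Density of the mapping above: cos(theta) / pi."""
    return max(0.0, wo[2]) / math.pi
```

Sampling proportionally to cos(theta) is what makes the diffuse BSDF's sample() so simple: the cosine factor of the rendering equation cancels against the pdf, so the returned sample weight is just the reflectance.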
After defining this new BSDF class, we have to register it as a plugin. This allows Mitsuba to instantiate this new BSDF when loading a scene:
register_bsdf("mydiffusebsdf", lambda props: MyDiffuseBSDF(props))
The register_bsdf function takes the name of our new plugin and a function to
construct new instances. After that, we can use our new BSDF in an XML scene
file by specifying

<bsdf type="mydiffusebsdf"/>
The scene can then be rendered by calling the standard render function:
# Load an XML file which specifies "mydiffusebsdf" as material
filename = 'path/to/my/scene.xml'
Thread.thread().file_resolver().append(os.path.dirname(filename))
scene = load_file(filename)
scene.integrator().render(scene, scene.sensors()[0])
film = scene.sensors()[0].film()
film.set_destination_file('my-diffuse-bsdf.exr')
film.develop()
Note
The code for this example can be found in docs/examples/04_diffuse_bsdf/diffuse_bsdf.py
In this example, we will implement a custom direct illumination integrator using the same mechanism. The resulting image will have realistic shadows and shading, but no global illumination.
The main rendering routine can be implemented in around 30 lines of code:
def integrator_sample(scene, sampler, rays, medium, active=True):
    si = scene.ray_intersect(rays)
    active = si.is_valid() & active

    # Visible emitters
    emitter_vis = si.emitter(scene, active)
    result = ek.select(active, Emitter.eval_vec(emitter_vis, si, active), Vector3f(0.0))

    ctx = BSDFContext()
    bsdf = si.bsdf(rays)

    # Emitter sampling
    sample_emitter = active & has_flag(BSDF.flags_vec(bsdf), BSDFFlags.Smooth)
    ds, emitter_val = scene.sample_emitter_direction(si, sampler.next_2d(sample_emitter),
                                                     True, sample_emitter)
    active_e = sample_emitter & ek.neq(ds.pdf, 0.0)

    wo = si.to_local(ds.d)
    bsdf_val = BSDF.eval_vec(bsdf, ctx, si, wo, active_e)
    bsdf_pdf = BSDF.pdf_vec(bsdf, ctx, si, wo, active_e)
    mis = ek.select(ds.delta, Float(1), mis_weight(ds.pdf, bsdf_pdf))
    result += ek.select(active_e, emitter_val * bsdf_val * mis, Vector3f(0))

    # BSDF sampling
    active_b = active
    bs, bsdf_val = BSDF.sample_vec(bsdf, ctx, si, sampler.next_1d(active),
                                   sampler.next_2d(active), active_b)
    si_bsdf = scene.ray_intersect(si.spawn_ray(si.to_world(bs.wo)), active_b)
    emitter = si_bsdf.emitter(scene, active_b)
    active_b &= ek.neq(emitter, 0)
    emitter_val = Emitter.eval_vec(emitter, si_bsdf, active_b)
    delta = has_flag(bs.sampled_type, BSDFFlags.Delta)
    ds = DirectionSample3f(si_bsdf, si)
    ds.object = emitter
    emitter_pdf = ek.select(delta, Float(0), scene.pdf_emitter_direction(si, ds, active_b))
    result += ek.select(active_b, bsdf_val * emitter_val * mis_weight(bs.pdf, emitter_pdf), Vector3f(0))

    return result, si.is_valid(), ek.select(si.is_valid(), si.t, Float(0.0))
The code is very similar to the direct illumination integrator implemented in
C++ (in src/integrators/direct.cpp). The function takes the current scene,
sampler, and array of rays as arguments. The active argument specifies which
lanes are active.
We first intersect the provided rays against the scene and evaluate the radiance of directly visible emitters. Then, we explicitly sample positions on emitters and evaluate their contributions. Additionally, we sample a ray direction according to the BSDF and evaluate whether these rays also hit some emitters. These different contributions are then combined using multiple importance sampling to reduce variance.
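The mis_weight helper used in the listing above is not shown there; it computes how much weight each sampling strategy receives. Assuming the power heuristic with exponent 2 (the heuristic used in Mitsuba's C++ direct integrator), a scalar sketch could look like:

```python
def mis_weight(pdf_a, pdf_b):
    """Multiple importance sampling weight for strategy 'a' under the power
    heuristic with exponent 2. Returns 0 when strategy 'a' could not have
    generated the sample at all (pdf_a == 0)."""
    pdf_a *= pdf_a
    pdf_b *= pdf_b
    return pdf_a / (pdf_a + pdf_b) if pdf_a > 0.0 else 0.0
```

Note that the actual code operates on Enoki arrays, so the Python-level branch above would instead be an ek.select over the whole wavefront.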
This function will be invoked for an array of different rays, hence each ray
can potentially hit a surface with a different BSDF. Therefore,
bsdf = si.bsdf(rays) will be an array of different BSDF pointers. To call
member functions of these different BSDFs, we invoke special dispatch
functions for vectorized method calls:

bsdf_val = BSDF.eval_vec(bsdf, ctx, si, wo, active_e)
bsdf_pdf = BSDF.pdf_vec(bsdf, ctx, si, wo, active_e)
This ensures that the C++ or Python implementation of the right BSDF model is invoked for each ray. Other than that, the code and the interfaces used are nearly identical to the C++ version. Please refer to the documentation of the C++ types for details on the different functions and objects.
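The active masks threaded through the code serve a related purpose: lanes whose rays missed the scene, or whose sampling attempt failed, must not contribute. Per lane, ek.select(mask, a, b) picks a where the mask is true and b elsewhere; the following plain-Python sketch mimics this behavior over lists (illustrative only, not Enoki's API):

```python
def select(mask, a, b):
    """Per-lane selection: pick a[i] where mask[i] is true, else b[i]."""
    return [ai if mi else bi for mi, ai, bi in zip(mask, a, b)]

# Lanes whose rays missed the scene are masked out and contribute zero:
active = [True, False, True]
radiance = [0.8, 0.5, 0.2]
result = select(active, radiance, [0.0, 0.0, 0.0])
```

Enoki applies the same idea at the level of whole wavefronts, which is why the integrator never branches on individual rays.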
When implementing the depth integrator in the section on custom rendering
pipelines, considerable work was necessary to correctly sample rays from the
camera and splat samples into the film. While this can be very useful for
certain applications, it is also a bit tedious. In many cases, we simply want
to implement a custom integrator and not bother with how camera rays are
generated exactly. In this example, we will therefore use a more elegant
mechanism, which allows us to simply extend the SamplingIntegrator base class.
This class is the base class of all Monte Carlo integrators, e.g. the path
tracer. By using this class to define our integrator, we can rely on the
existing machinery in Mitsuba to correctly sample camera rays and splat pixel
values. Since we already defined a function integrator_sample with the right
interface, defining a new integrator becomes very simple:
class MyDirectIntegrator(SamplingIntegrator):
    def __init__(self, props):
        SamplingIntegrator.__init__(self, props)

    def sample(self, scene, sampler, ray, medium, active):
        result, is_valid, depth = integrator_sample(scene, sampler, ray, medium, active)
        return result, is_valid, [depth]

    def aov_names(self):
        return ["depth.Y"]

    def to_string(self):
        return "MyDirectIntegrator[]"
This integrator not only returns the color, but additionally renders out the depth of the scene.
When using this integrator, Mitsuba will output a multichannel EXR file with a separate depth channel.
The aov_names method is used to name the different arbitrary output variables (AOVs).
The same mechanism could be used to output, for example, surface normals.
If no AOVs are needed, you can simply return an empty list instead.
After defining this new integrator class, we have to register it as a plugin. This allows Mitsuba to instantiate this new integrator when loading a scene:
register_integrator("mydirectintegrator", lambda props: MyDirectIntegrator(props))
The register_integrator function takes the name of our new plugin and a
function to construct new instances. After that, we can use our new integrator
in a scene by specifying

<integrator type="mydirectintegrator"/>
The scene is then rendered by calling the standard render function:
# Load an XML file which specifies "mydirectintegrator" as the scene's integrator
filename = 'path/to/my/scene.xml'
Thread.thread().file_resolver().append(os.path.dirname(filename))
scene = load_file(filename)
scene.integrator().render(scene, scene.sensors()[0])
film = scene.sensors()[0].film()
film.set_destination_file('my-direct-integrator.exr')
film.develop()
Note
The code for this example can be found in docs/examples/03_direct_integrator/direct_integrator.py