Base class: mitsuba.core.Object
Bidirectional Scattering Distribution Function (BSDF) interface
This class provides an abstract interface to all %BSDF plugins in Mitsuba. It exposes functions for evaluating and sampling the model, and for querying associated probability densities.
By default, functions in this class sample and evaluate the complete BSDF, but it is also possible to pick and choose individual components of multi-lobed BSDFs based on their properties and component indices. This selection is specified using a context data structure that is provided along with every operation.
When polarization is enabled, BSDF sampling and evaluation returns 4x4 Mueller matrices that describe how scattering changes the polarization state of incident light. Mueller matrices (e.g. for mirrors) are expressed with respect to a reference coordinate system for the incident and outgoing direction. The convention used here is that these coordinate systems are given by coordinate_system(wi) and coordinate_system(wo), where ‘wi’ and ‘wo’ are the incident and outgoing direction in local coordinates.
props (mitsuba.core.Properties): no description available
Number of components this BSDF is comprised of.
active (bool): Mask to specify active lanes.
no description available
Evaluate the BSDF f(wi, wo) or its adjoint version f^{*}(wi, wo) and multiply by the cosine foreshortening term.
Based on the information in the supplied query context ctx, this method will either evaluate the entire BSDF or query individual components (e.g. the diffuse lobe). Only smooth (i.e. non-Dirac-delta) components are supported: calling eval() on a perfectly specular material will return zero.
Note that the incident direction does not need to be explicitly specified. It is obtained from the field si.wi.
ctx (mitsuba.render.BSDFContext): A context data structure describing which lobes to evaluate, and whether radiance or importance is being transported.
si (mitsuba.render.SurfaceInteraction3f): A surface interaction data structure describing the underlying surface position. The incident direction is obtained from the field si.wi.
wo (enoki.scalar.Vector3f): The outgoing direction
active (bool): Mask to specify active lanes.
no description available
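The cosine foreshortening convention used by eval() can be illustrated with a small pure-Python sketch of a Lambertian lobe. The function name and scalar interface here are hypothetical; the actual method operates on vectorized types and a full SurfaceInteraction:

```python
import math

def eval_diffuse(albedo, wo_z):
    """Hypothetical smooth diffuse lobe: f(wi, wo) * cos(theta_o).

    A Lambertian BSDF has the constant value albedo / pi; following the
    eval() convention described above, the returned value already includes
    the cosine foreshortening term cos(theta_o) = wo.z (wo is expressed in
    local coordinates, where the surface normal is (0, 0, 1)).
    """
    if wo_z <= 0.0:            # outgoing direction below the surface
        return 0.0
    return albedo / math.pi * wo_z

# Straight-on outgoing direction: the cosine term equals 1
print(eval_diffuse(0.5, 1.0))  # 0.5 / pi
```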
Evaluate the un-scattered transmission component of the BSDF
This method will evaluate the un-scattered transmission (BSDFFlags::Null) of the BSDF for light arriving from direction w.
The default implementation returns zero.
si (mitsuba.render.SurfaceInteraction3f): A surface interaction data structure describing the underlying surface position. The incident direction is obtained from the field si.wi.
active (bool): Mask to specify active lanes.
no description available
Flags for all components combined.
active (bool): Mask to specify active lanes.
no description available
Flags for a specific component of this BSDF.
index (int): no description available
active (bool): Mask to specify active lanes.
no description available
Return a string identifier
no description available
Does the implementation require access to texture-space differentials?
active (bool): Mask to specify active lanes.
no description available
Compute the probability per unit solid angle of sampling a given direction
This method provides access to the probability density that would result when supplying the same BSDF context and surface interaction data structures to the sample() method. It correctly handles changes in probability when only a subset of the components is chosen for sampling (this can be done using the BSDFContext::component and BSDFContext::type_mask fields).
Note that the incident direction does not need to be explicitly specified. It is obtained from the field si.wi.
ctx (mitsuba.render.BSDFContext): A context data structure describing which lobes to evaluate, and whether radiance or importance is being transported.
si (mitsuba.render.SurfaceInteraction3f): A surface interaction data structure describing the underlying surface position. The incident direction is obtained from the field si.wi.
wo (enoki.scalar.Vector3f): The outgoing direction
active (bool): Mask to specify active lanes.
no description available
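For a cosine-hemisphere sampled diffuse lobe, the density returned by pdf() takes a particularly simple form. The sketch below is a hypothetical scalar illustration of that case, not the vectorized plugin code:

```python
import math

def pdf_diffuse(wo_z):
    """Hypothetical pdf() for a cosine-hemisphere sampled diffuse lobe.

    The density per unit solid angle is cos(theta_o) / pi on the upper
    hemisphere (wo.z > 0 in local coordinates) and zero below the surface.
    """
    return max(0.0, wo_z) / math.pi
```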
Importance sample the BSDF model
The function returns a sample data structure along with the importance weight, which is the value of the BSDF divided by the probability density, and multiplied by the cosine foreshortening factor (if needed — it is omitted for degenerate BSDFs like smooth mirrors/dielectrics).
If the supplied context data structure selects a subset of components in a multi-lobe BRDF model, sampling is restricted to this subset. Depending on the provided transport type, either the BSDF or its adjoint version is sampled.
When sampling a continuous/non-delta component, this method also multiplies by the cosine foreshortening factor with respect to the sampled direction.
ctx (mitsuba.render.BSDFContext): A context data structure describing which lobes to sample, and whether radiance or importance is being transported.
si (mitsuba.render.SurfaceInteraction3f): A surface interaction data structure describing the underlying surface position. The incident direction is obtained from the field si.wi.
sample1 (float): A uniformly distributed sample on $[0,1]$. It is used to select the BSDF lobe in multi-lobe models.
sample2 (enoki.scalar.Vector2f): A uniformly distributed sample on $[0,1]^2$. It is used to generate the sampled direction.
active (bool): Mask to specify active lanes.
Tuple[mitsuba.render.BSDFSample3f, enoki.scalar.Vector3f]: A pair (bs, value) consisting of
bs: Sampling record, indicating the sampled direction, PDF values and other information. The contents are undefined if sampling failed.
value: The BSDF value (multiplied by the cosine foreshortening factor when a non-delta component is sampled). A zero spectrum indicates that sampling failed.
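The division by the probability density described above can be sketched in pure Python for a Lambertian lobe sampled with a cosine-weighted hemisphere warp. All names here are hypothetical illustrations; the real sample() additionally fills in a BSDFSample record:

```python
import math

def sample_diffuse(albedo, sample2):
    """Hypothetical sketch of sample() for a Lambertian lobe.

    Cosine-hemisphere sampling makes the pdf equal to cos(theta) / pi, so
    the importance weight f / pdf * cos(theta) collapses to the albedo,
    a constant weight, which is why this warping is the standard choice.
    """
    r = math.sqrt(sample2[0])
    phi = 2.0 * math.pi * sample2[1]
    wo = (r * math.cos(phi), r * math.sin(phi),
          math.sqrt(max(0.0, 1.0 - sample2[0])))
    pdf = wo[2] / math.pi
    weight = albedo  # (albedo/pi) / pdf * wo.z simplifies to albedo
    return wo, pdf, weight
```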
Context data structure for BSDF evaluation and sampling
BSDF models in Mitsuba can be queried and sampled using a variety of different modes – for instance, a rendering algorithm can indicate whether radiance or importance is being transported, and it can also restrict evaluation and sampling to a subset of lobes in a multi-lobe BSDF model.
The BSDFContext data structure encodes these preferences and is supplied to most BSDF methods.
mode (mitsuba.render.TransportMode): no description available
mode (mitsuba.render.TransportMode): no description available
type_mask (int): no description available
component (int): no description available
Integer value of requested BSDF component index to be sampled/evaluated.
Checks whether a given BSDF component type and BSDF component index are enabled in this context.
type (mitsuba.render.BSDFFlags): no description available
component (int): no description available
no description available
Transported mode (radiance or importance)
Reverse the direction of light transport in the record
This updates the transport mode (radiance to importance and vice versa).
no description available
This list of flags is used to classify the different types of lobes that are implemented in a BSDF instance.
They are also useful for picking out individual components, e.g., by setting combinations in BSDFContext::type_mask.
Members:
No flags set (default value)
‘null’ scattering event, i.e. particles do not undergo deflection
Ideally diffuse reflection
Ideally diffuse transmission
Glossy reflection
Glossy transmission
Reflection into a discrete set of directions
Transmission into a discrete set of directions
The lobe is not invariant to rotation around the normal
The BSDF depends on the UV coordinates
Flags non-symmetry (e.g. transmission in dielectric materials)
Supports interactions on the front-facing side
Supports interactions on the back-facing side
Any reflection component (scattering into discrete, 1D, or 2D set of directions)
Any transmission component (scattering into discrete, 1D, or 2D set of directions)
Diffuse scattering into a 2D set of directions
Non-diffuse scattering into a 2D set of directions
Scattering into a 2D set of directions
Scattering into a discrete set of directions
Scattering into a 1D space of directions
Any kind of scattering
arg0 (int): no description available
Data structure holding the result of BSDF sampling operations.
Given a surface interaction and an incident/exitant direction pair (wi, wo), create a query record to evaluate the BSDF or its sampling density.
By default, all components will be sampled regardless of what measure they live on.
wo (enoki.scalar.Vector3f): An outgoing direction in local coordinates. This should be a normalized direction vector that points away from the scattering event.
Copy constructor
bs (mitsuba.render.BSDFSample3f): no description available
Relative index of refraction in the sampled direction
Probability density at the sample
Stores the component index that was sampled by BSDF::sample()
Stores the component type that was sampled by BSDF::sample()
Normalized outgoing direction in local coordinates
Specifies the transport mode when sampling or evaluating a scattering function
Members:
Radiance transport
Importance transport
arg0 (int): no description available
Implementation of the Beckmann and GGX / Trowbridge-Reitz microfacet distributions and various useful sampling routines
Based on the papers
“Microfacet Models for Refraction through Rough Surfaces” by Bruce Walter, Stephen R. Marschner, Hongsong Li, and Kenneth E. Torrance
and
“Importance Sampling Microfacet-Based BSDFs using the Distribution of Visible Normals” by Eric Heitz and Eugene D’Eon
The visible normal sampling code was provided by Eric Heitz and Eugene D’Eon. An improvement of the Beckmann model sampling routine is discussed in
“An Improved Visible Normal Sampling Routine for the Beckmann Distribution” by Wenzel Jakob
An improvement of the GGX model sampling routine is discussed in “A Simpler and Exact Sampling Routine for the GGX Distribution of Visible Normals” by Eric Heitz
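To make the distribution concrete, here is a minimal pure-Python evaluation of the isotropic GGX / Trowbridge-Reitz normal distribution function, a textbook formula sketch; the function name is hypothetical, and Mitsuba's implementation is vectorized and also supports anisotropy:

```python
import math

def ggx_D(cos_theta_m, alpha):
    """Isotropic GGX / Trowbridge-Reitz normal distribution D(m).

    cos_theta_m is the cosine between the microfacet normal m and the
    geometric normal, and alpha is the roughness parameter. Normals on
    the back side contribute zero.
    """
    if cos_theta_m <= 0.0:
        return 0.0
    c2 = cos_theta_m * cos_theta_m
    t = c2 * (alpha * alpha - 1.0) + 1.0
    return alpha * alpha / (math.pi * t * t)

# At normal incidence the GGX NDF peaks at 1 / (pi * alpha^2)
print(ggx_D(1.0, 0.3))
```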
type (mitsuba.render.MicrofacetType): no description available
alpha (float): no description available
sample_visible (bool): no description available
type (mitsuba.render.MicrofacetType): no description available
alpha_u (float): no description available
alpha_v (float): no description available
sample_visible (bool): no description available
type (mitsuba.render.MicrofacetType): no description available
alpha (float): no description available
sample_visible (bool): no description available
type (mitsuba.render.MicrofacetType): no description available
alpha_u (float): no description available
alpha_v (float): no description available
sample_visible (bool): no description available
arg0 (mitsuba.core.Properties): no description available
Smith’s separable shadowing-masking approximation
wi (enoki.scalar.Vector3f): no description available
wo (enoki.scalar.Vector3f): no description available
m (enoki.scalar.Vector3f): no description available
no description available
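Smith's separable approximation factors the shadowing-masking term into a product of two single-direction terms, G(wi, wo, m) ≈ G1(wi) G1(wo). The sketch below assumes the isotropic GGX variant, for which G1 has a simple closed form; the names are illustrative, not Mitsuba's API:

```python
import math

def smith_g1_ggx(cos_theta, alpha):
    """Smith masking term G1 for the isotropic GGX distribution.

    Standard closed form: G1(v) = 2 / (1 + sqrt(1 + alpha^2 tan^2(theta_v))).
    """
    c2 = cos_theta * cos_theta
    tan2 = max(0.0, 1.0 - c2) / c2
    return 2.0 / (1.0 + math.sqrt(1.0 + alpha * alpha * tan2))

def smith_G(cos_i, cos_o, alpha):
    # Smith's separable shadowing-masking approximation
    return smith_g1_ggx(cos_i, alpha) * smith_g1_ggx(cos_o, alpha)

print(smith_G(1.0, 1.0, 0.5))  # no shadowing at normal incidence: 1.0
```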
Return the roughness (isotropic case)
no description available
Return the roughness along the tangent direction
no description available
Return the roughness along the bitangent direction
no description available
Evaluate the microfacet distribution function
m (enoki.scalar.Vector3f): The microfacet normal
no description available
Is this an anisotropic microfacet distribution?
no description available
Is this an isotropic microfacet distribution?
no description available
Returns the density function associated with the sample() function.
wi (enoki.scalar.Vector3f): The incident direction (only relevant if visible normal sampling is used)
m (enoki.scalar.Vector3f): The microfacet normal
no description available
Draw a sample from the microfacet normal distribution and return the associated probability density
sample (enoki.scalar.Vector2f): A uniformly distributed 2D sample
pdf: The probability density wrt. solid angles
wi (enoki.scalar.Vector3f): no description available
no description available
Return whether only visible normals are sampled
no description available
Visible normal sampling code for the alpha=1 case
cos_theta_i (float): no description available
sample (enoki.scalar.Vector2f): no description available
no description available
Scale the roughness values by some constant
value (float): no description available
no description available
Smith’s shadowing-masking function for a single direction
v (enoki.scalar.Vector3f): An arbitrary direction
m (enoki.scalar.Vector3f): The microfacet normal
no description available
Return the distribution type
mitsuba.render.MicrofacetType: no description available
Supported normal distribution functions
Members:
Beckmann distribution derived from Gaussian random surfaces
GGX: Long-tailed distribution for very rough surfaces (aka. Trowbridge-Reitz distr.)
arg0 (int): no description available
Base class: mitsuba.core.Object
Endpoint: an abstract interface to light sources and sensors
This class implements an abstract interface to all sensors and light sources emitting radiance and importance, respectively. Subclasses implement functions to evaluate and sample the profile, and to compute probability densities associated with the provided sampling techniques.
The name endpoint refers to the property that while a light path may involve any number of scattering events, it always starts and ends with emission and a measurement, respectively.
In addition to Endpoint::sample_ray, which generates a sample from the profile, subclasses also provide a specialized direction sampling method. This is a generalization of direct illumination techniques to both emitters and sensors. A direction sampling method is given an arbitrary reference position in the scene and samples a direction from the reference point towards the endpoint (ideally proportional to the emission/sensitivity profile). This reduces the sampling domain from 4D to 2D, which often enables the construction of smarter specialized sampling techniques.
When rendering scenes involving participating media, it is important to know what medium surrounds the sensors and light sources. For this reason, every endpoint instance keeps a reference to a medium (which may be set to nullptr when it is surrounded by vacuum).
Return an axis-aligned box bounding the spatial extents of the emitter
mitsuba.core.BoundingBox3f: no description available
Given a ray-surface intersection, return the emitted radiance or importance traveling along the reverse direction
This function is e.g. used when an area light source has been hit by a ray in a path tracing-style integrator, and it subsequently needs to be queried for the emitted radiance along the negative ray direction. The default implementation throws an exception, which states that the method is not implemented.
si (mitsuba.render.SurfaceInteraction3f): An intersection record that specifies both the query position and direction (using the si.wi field)
active (bool): Mask to specify active lanes.
The emitted radiance or importance
Return a pointer to the medium that surrounds the emitter
mitsuba.render.Medium: no description available
Does the method sample_ray() require a uniformly distributed 2D sample for the sample2 parameter?
no description available
Does the method sample_ray() require a uniformly distributed 2D sample for the sample3 parameter?
no description available
Evaluate the probability density of the direct sampling method implemented by the sample_direction() method.
ds (mitsuba.render.DirectionSample3f): A direct sampling record, which specifies the query location.
it (mitsuba.render.Interaction3f): no description available
active (bool): Mask to specify active lanes.
no description available
Given a reference point in the scene, sample a direction from the reference point towards the endpoint (ideally proportional to the emission/sensitivity profile)
This operation is a generalization of direct illumination techniques to both emitters and sensors. A direction sampling method is given an arbitrary reference position in the scene and samples a direction from the reference point towards the endpoint (ideally proportional to the emission/sensitivity profile). This reduces the sampling domain from 4D to 2D, which often enables the construction of smarter specialized sampling techniques.
Ideally, the implementation should importance sample the product of the emission profile and the geometry term between the reference point and the position on the endpoint.
The default implementation throws an exception.
ref: A reference position somewhere within the scene.
sample (enoki.scalar.Vector2f): A uniformly distributed 2D point on the domain [0,1]^2
it (mitsuba.render.Interaction3f): no description available
active (bool): Mask to specify active lanes.
Tuple[mitsuba.render.DirectionSample3f, enoki.scalar.Vector3f]: A DirectionSample instance describing the generated sample along with a spectral importance weight.
Importance sample a ray proportional to the endpoint’s sensitivity/emission profile.
The endpoint profile is a six-dimensional quantity that depends on time, wavelength, surface position, and direction. This function takes a given time value and five uniformly distributed samples on the interval [0, 1] and warps them so that the returned ray follows the profile. Any discrepancies between ideal and actual sampled profile are absorbed into a spectral importance weight that is returned along with the ray.
time (float): The scene time associated with the ray to be sampled
sample1 (float): A uniformly distributed 1D value that is used to sample the spectral dimension of the emission profile.
sample2 (enoki.scalar.Vector2f): A uniformly distributed sample on the domain [0,1]^2. For sensor endpoints, this argument corresponds to the sample position in fractional pixel coordinates relative to the crop window of the underlying film. This argument is ignored if needs_sample_2() == false.
sample3 (enoki.scalar.Vector2f): A uniformly distributed sample on the domain [0,1]^2. For sensor endpoints, this argument determines the position on the aperture of the sensor. This argument is ignored if needs_sample_3() == false.
active (bool): Mask to specify active lanes.
Tuple[mitsuba.core.Ray3f, enoki.scalar.Vector3f]: The sampled ray and (potentially spectrally varying) importance weights. The latter account for the difference between the profile and the actual used sampling density function.
Set the medium that surrounds the emitter.
medium (mitsuba.render.Medium): no description available
no description available
Set the shape associated with this endpoint.
shape (mitsuba.render.Shape): no description available
no description available
Return the shape to which the emitter is currently attached
mitsuba.render.Shape: no description available
Return the local space to world space transformation
mitsuba.core.AnimatedTransform: no description available
Base class: mitsuba.render.Endpoint
Flags for all components combined.
arg0 (bool): no description available
no description available
Is this an environment map light emitter?
no description available
This list of flags is used to classify the different types of emitters.
Members:
No flags set (default value)
The emitter lies at a single point in space
The emitter emits light in a single direction
The emitter is placed at infinity (e.g. environment maps)
The emitter is attached to a surface (e.g. area emitters)
The emission depends on the UV coordinates
Delta function in either position or direction
arg0 (int): no description available
Base class: mitsuba.render.Endpoint
Return the Film instance associated with this sensor
mitsuba.render.Film: no description available
Does the sampling technique require a sample for the aperture position?
no description available
time (float): no description available
sample1 (float): no description available
sample2 (enoki.scalar.Vector2f): no description available
sample3 (enoki.scalar.Vector2f): no description available
active (bool): Mask to specify active lanes.
Tuple[mitsuba.core.RayDifferential3f, enoki.scalar.Vector3f]: no description available
Return the sensor’s sample generator
This is the root sampler, which will later be cloned a number of times to provide each participating worker thread with its own instance (see Scene::sampler()). Therefore, this sampler should never be used for anything except creating clones.
mitsuba.render.Sampler: no description available
Return the time value of the shutter opening event
no description available
Return the length of time for which the shutter remains open
no description available
Base class: mitsuba.render.Sensor
Projective camera interface
This class provides an abstract interface to several types of sensors that are commonly used in computer graphics, such as perspective and orthographic camera models.
The interface is meant to be implemented by any kind of sensor whose world-to-clip-space transformation can be expressed using only linear operations on homogeneous coordinates.
A useful feature of ProjectiveCamera sensors is that their view can be rendered using the traditional OpenGL pipeline.
Return the far clip plane distance
no description available
Return the distance to the focal plane
no description available
Return the near clip plane distance
no description available
Base class: mitsuba.core.Object
mi (mitsuba.render.MediumInteraction3f): no description available
si (mitsuba.render.SurfaceInteraction3f): no description available
active (bool): Mask to specify active lanes.
no description available
mi (mitsuba.render.MediumInteraction3f): no description available
active (bool): Mask to specify active lanes.
no description available
mi (mitsuba.render.MediumInteraction3f): no description available
active (bool): Mask to specify active lanes.
no description available
Return a string identifier
no description available
ray (mitsuba.core.Ray3f): no description available
no description available
Return the phase function of this medium
mitsuba.render.PhaseFunction: no description available
ray (mitsuba.core.Ray3f): no description available
sample (float): no description available
channel (int): no description available
active (bool): Mask to specify active lanes.
mitsuba.render.MediumInteraction3f: no description available
Returns whether this specific medium instance uses emitter sampling
no description available
Base class: mitsuba.core.Object
Evaluates the phase function model
The function returns the value (which equals the PDF) of the phase function in the query direction.
ctx (mitsuba.render.PhaseFunctionContext): A phase function sampling context containing information about the transport mode
mi (mitsuba.render.MediumInteraction3f): A medium interaction data structure describing the underlying medium position. The incident direction is obtained from the field mi.wi.
wo (enoki.scalar.Vector3f): An outgoing direction to evaluate.
active (bool): Mask to specify active lanes.
The value of the phase function in direction wo
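The value-equals-PDF convention holds for normalized analytic phase functions such as Henyey-Greenstein, shown here as a pure-Python illustration (a common model, used only as an example; it is not necessarily the plugin backing this interface):

```python
import math

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function.

    Normalized so that integrating over the sphere of directions yields 1,
    hence the value doubles as the sampling PDF. For g = 0 it reduces to
    the isotropic value 1 / (4 * pi).
    """
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)

print(hg_phase(0.0, 0.0))  # isotropic case: 1 / (4 * pi)
```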
Return a string identifier
no description available
Importance sample the phase function model
The function returns a sampled direction.
ctx (mitsuba.render.PhaseFunctionContext): A phase function sampling context containing information about the transport mode
mi (mitsuba.render.MediumInteraction3f): A medium interaction data structure describing the underlying medium position. The incident direction is obtained from the field mi.wi.
sample: A uniformly distributed sample on $[0,1]^2$. It is used to generate the sampled direction.
sample1 (enoki.scalar.Vector2f): no description available
active (bool): Mask to specify active lanes.
A sampled direction wo
Reverse the direction of light transport in the record
This updates the transport mode (radiance to importance and vice versa).
no description available
Sampler object
This enumeration is used to classify phase functions into different types, i.e. into isotropic, anisotropic and microflake phase functions.
This can be used to optimize implementations, for example to incur less overhead when the phase function is not a microflake phase function.
Members:
arg0 (int): no description available
Base class: mitsuba.core.Object
Base class of all geometric shapes in Mitsuba
This class provides core functionality for sampling positions on surfaces, computing ray intersections, and bounding shapes within ray intersection acceleration data structures.
Return an axis-aligned box that bounds all shape primitives (including any transformations that may have been applied to them)
mitsuba.core.BoundingBox3f: no description available
Return an axis-aligned box that bounds a single shape primitive (including any transformations that may have been applied to it)
The default implementation simply calls bbox().
index (int): no description available
mitsuba.core.BoundingBox3f: no description available
Return an axis-aligned box that bounds a single shape primitive after it has been clipped to another bounding box.
This is extremely important for constructing high-quality kd-trees. The default implementation simply takes the bounding box returned by bbox(ScalarIndex index) and clips it to clip.
index (int): no description available
clip (mitsuba.core.BoundingBox3f): no description available
mitsuba.core.BoundingBox3f: no description available
mitsuba.render.BSDF: no description available
ray (mitsuba.core.Ray3f): no description available
pi (mitsuba.render.PreliminaryIntersection3f): no description available
flags (mitsuba.render.HitComputeFlags): no description available
active (bool): Mask to specify active lanes.
mitsuba.render.SurfaceInteraction3f: no description available
Return the number of primitives (triangles, hairs, ...) contributed to the scene by this shape
Includes instanced geometry. The default implementation simply returns the same value as primitive_count().
no description available
active (bool): Mask to specify active lanes.
mitsuba.render.Emitter: no description available
Return the medium that lies on the exterior of this shape
mitsuba.render.Medium: no description available
Return a string identifier
no description available
Return the medium that lies on the interior of this shape
mitsuba.render.Medium: no description available
Is this shape also an area emitter?
no description available
Does the surface of this shape mark a medium transition?
no description available
Is this shape a triangle mesh?
no description available
Is this shape also an area sensor?
no description available
Return whether the shape’s parameters require gradients (the default implementation returns false)
no description available
Query the probability density of sample_direction()
it (mitsuba.render.Interaction3f): A reference position somewhere within the scene.
ps (mitsuba.render.DirectionSample3f): A position record describing the sample in question
active (bool): Mask to specify active lanes.
The probability density per unit solid angle
Query the probability density of sample_position() for a particular point on the surface.
ps (mitsuba.render.PositionSample3f): A position record describing the sample in question
active (bool): Mask to specify active lanes.
The probability density per unit area
Returns the number of sub-primitives that make up this shape
The default implementation simply returns 1
no description available
Test for an intersection and return detailed information
This operation combines the prior ray_intersect_preliminary() and compute_surface_interaction() operations.
ray (mitsuba.core.Ray3f): The ray to be tested for an intersection
flags (mitsuba.render.HitComputeFlags): Describe how the detailed information should be computed
active (bool): Mask to specify active lanes.
mitsuba.render.SurfaceInteraction3f: no description available
Fast ray intersection test
Efficiently test whether the shape is intersected by the given ray, and cache preliminary information about the intersection if that is the case.
If the intersection is deemed relevant (e.g. the closest to the ray origin), detailed intersection information can later be obtained via the create_surface_interaction() method.
ray (mitsuba.core.Ray3f): The ray to be tested for an intersection
cache: Temporary space ((MTS_KD_INTERSECTION_CACHE_SIZE-2) * sizeof(Float[P]) bytes) that must be supplied to cache information about the intersection.
active (bool): Mask to specify active lanes.
mitsuba.render.PreliminaryIntersection3f: no description available
ray (mitsuba.core.Ray3f): no description available
active (bool): Mask to specify active lanes.
no description available
Sample a direction towards this shape with respect to solid angles measured at a reference position within the scene
An ideal implementation of this interface would achieve a uniform solid angle density within the surface region that is visible from the reference position it.p (though such an ideal implementation is usually neither feasible nor advisable due to poor efficiency).
The function returns the sampled position and the inverse probability per unit solid angle associated with the sample.
When the Shape subclass does not supply a custom implementation of this function, the Shape class reverts to a fallback approach that piggybacks on sample_position(). This will generally lead to a suboptimal sample placement and higher variance in Monte Carlo estimators using the samples.
it (mitsuba.render.Interaction3f): A reference position somewhere within the scene.
sample (enoki.scalar.Vector2f): A uniformly distributed 2D point on the domain [0,1]^2
active (bool): Mask to specify active lanes.
mitsuba.render.DirectionSample3f: A DirectionSample instance describing the generated sample
Sample a point on the surface of this shape
The sampling strategy is ideally uniform over the surface, though implementations are allowed to deviate from a perfectly uniform distribution as long as this is reflected in the returned probability density.
time (float): The scene time associated with the position sample
sample (enoki.scalar.Vector2f): A uniformly distributed 2D point on the domain [0,1]^2
active (bool): Mask to specify active lanes.
mitsuba.render.PositionSample3f: A PositionSample instance describing the generated sample
mitsuba.render.Sensor: no description available
Return the shape’s surface area.
The function assumes that the object is not undergoing some kind of time-dependent scaling.
The default implementation throws an exception.
no description available
Base class: mitsuba.render.Shape
(self: mitsuba.render.Mesh, name: str, vertex_count: int, face_count: int, props: mitsuba.core.Properties = Properties[plugin_name = "", id = "", elements = { }], has_vertex_normals: bool = False, has_vertex_texcoords: bool = False) -> None
Create a new mesh with the given vertex and face data structures
Add an attribute buffer with the given name and dim
name (str): no description available
size (int): no description available
buffer (enoki.dynamic.Float32): no description available
no description available
Return the mesh attribute associated with name
name (str): no description available
no description available
uv (enoki.scalar.Vector2f): no description available
active (bool): Mask to specify active lanes.
mitsuba.render.SurfaceInteraction3f: no description available
Return the total number of faces
no description available
Return face indices buffer
no description available
Does this mesh have per-vertex normals?
no description available
Does this mesh have per-vertex texture coordinates?
no description available
Ray-triangle intersection test
Uses the algorithm by Moeller and Trumbore discussed at http://www.acm.org/jgt/papers/MollerTrumbore97/code.html.
index (int): Index of the triangle to be intersected.
ray (mitsuba.core.Ray3f): The ray segment to be used for the intersection query.
active (bool): Mask to specify active lanes.
mitsuba.render.PreliminaryIntersection3f: Returns an ordered tuple (mask, u, v, t), where mask indicates whether an intersection was found, t contains the distance from the ray origin to the intersection point, and u and v contain the first two components of the intersection in barycentric coordinates
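The Moeller-Trumbore test referenced above can be written compactly in pure Python; this is a scalar sketch mirroring the (mask, u, v, t) tuple described here, not Mitsuba's vectorized implementation:

```python
def ray_triangle_intersect(o, d, p0, p1, p2, eps=1e-8):
    """Moeller-Trumbore ray/triangle intersection test.

    o and d are the ray origin and direction, p0..p2 the triangle
    vertices (3-tuples). Returns (mask, u, v, t): mask is True on a hit,
    t is the distance along the ray, (u, v) are barycentric coordinates.
    """
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(p1, p0), sub(p2, p0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                      # ray parallel to the triangle
        return False, 0.0, 0.0, 0.0
    inv_det = 1.0 / det
    tvec = sub(o, p0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return False, 0.0, 0.0, 0.0
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return False, 0.0, 0.0, 0.0
    t = dot(e2, qvec) * inv_det
    return t > eps, u, v, t
```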
Recompute the bounding box (e.g. after modifying the vertex positions)
no description available
Compute smooth vertex normals and replace the current normal values
no description available
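One common way to compute such smooth normals is to accumulate face normals at each vertex and renormalize; the unnormalized cross product weights each face by twice its area. This is a plain-Python sketch of that scheme — Mitsuba's own weighting may differ.

```python
import math

def smooth_vertex_normals(vertices, faces):
    """Per-vertex normals by averaging area-weighted face normals (a sketch)."""
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in faces:
        p0, p1, p2 = vertices[i0], vertices[i1], vertices[i2]
        e1 = [p1[k] - p0[k] for k in range(3)]
        e2 = [p2[k] - p0[k] for k in range(3)]
        # cross product length is twice the face area -> area weighting
        fn = [e1[1] * e2[2] - e1[2] * e2[1],
              e1[2] * e2[0] - e1[0] * e2[2],
              e1[0] * e2[1] - e1[1] * e2[0]]
        for idx in (i0, i1, i2):
            for k in range(3):
                normals[idx][k] += fn[k]
    for n in normals:                   # renormalize the accumulated normals
        length = math.sqrt(sum(c * c for c in n))
        if length > 0:
            for k in range(3):
                n[k] /= length
    return normals
```

On a flat quad made of two triangles, every vertex normal comes out as (0, 0, 1).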
Return the total number of vertices
no description available
Return vertex normals buffer
no description available
Return vertex positions buffer
no description available
Return vertex texcoords buffer
no description available
Export mesh as a binary PLY file
filename
(str):no description available
no description available
Base class: mitsuba.core.Object
Abstract film base class - used to store samples generated by Integrator implementations.
To avoid lock-related bottlenecks when rendering with many cores, rendering threads first store results in an “image block”, which is then committed to the film using the put() method.
Return a bitmap object storing the developed contents of the film
raw
(bool):no description available
mitsuba.core.Bitmap
:no description available
Return the offset of the crop window
no description available
Return the size of the crop window
no description available
Does the destination file already exist?
basename
(mitsuba.core.filesystem.path
):no description available
no description available
offset
(enoki.scalar.Vector2i):no description available
size
(enoki.scalar.Vector2i):no description available
target_offset
(enoki.scalar.Vector2i):no description available
target
(mitsuba.core.Bitmap
):no description available
no description available
Should regions slightly outside the image plane be sampled to improve the quality of the reconstruction at the edges? This only makes sense when reconstruction filters other than the box filter are used.
no description available
Configure the film for rendering a specified set of channels
channels
(List[str]):no description available
no description available
Merge an image block into the film. This method should be thread-safe.
block
(mitsuba.render.ImageBlock
):no description available
no description available
Return the image reconstruction filter (const version)
mitsuba.core.ReconstructionFilter
:no description available
Set the size and offset of the crop window.
arg0
(enoki.scalar.Vector2i):no description available
arg1
(enoki.scalar.Vector2i):no description available
no description available
Set the target filename (with or without extension)
filename
(mitsuba.core.filesystem.path
):no description available
no description available
Ignoring the crop window, return the resolution of the underlying sensor
no description available
Base class: mitsuba.core.Object
Base class of all sample generators.
For each sample in a pixel, a sample generator produces a (hypothetical) point in the infinite-dimensional random number cube. A rendering algorithm can then request subsequent 1D or 2D components of this point using the next_1d and next_2d functions.
Scalar and wavefront rendering algorithms need to interact with the sampler interface in slightly different ways:
Scalar rendering algorithm:
1. Before beginning to render a pixel block, the rendering algorithm calls seed to initialize a new sequence with the specific seed offset.
2. The first pixel sample can now be computed, after which advance needs to be invoked. This repeats until all pixel samples have been generated. Note that some implementations need to be configured for a certain number of pixel samples, and exceeding these will lead to an exception being thrown.
3. While computing a pixel sample, the rendering algorithm usually requests batches of (pseudo-) random numbers using the next_1d and next_2d functions before moving on to the next sample.
Wavefront rendering algorithm:
1. Before beginning to render the wavefront, the rendering algorithm needs to inform the sampler of the number of samples rendered in parallel for every pixel in the wavefront. This can be achieved by calling set_samples_per_wavefront.
2. The rendering algorithm should then seed the sampler and set the appropriate wavefront size by calling seed. A different seed value, based on the base_seed and the seed offset, will be used for every sample (of every pixel) in the wavefront.
3. advance can be used to advance to the next sample in the sequence.
4. As in the scalar approach, the rendering algorithm can request batches of (pseudo-) random numbers using the next_1d and next_2d functions.
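The scalar workflow above can be illustrated with a tiny stand-in sampler built on Python's random module. TinySampler and its seed/advance/next_1d/next_2d protocol are hypothetical — a sketch of how a Sampler plugin behaves, not Mitsuba's actual API.

```python
import random

class TinySampler:
    """Hypothetical scalar sampler following the seed/advance/next_1d protocol."""
    def __init__(self, sample_count):
        self.sample_count = sample_count   # configured number of pixel samples
        self.rng = None
        self.sample_index = 0

    def seed(self, seed_offset):
        # deterministic: the same seed_offset reproduces the same sequence
        self.rng = random.Random(0x5EED ^ seed_offset)
        self.sample_index = 0

    def advance(self):
        self.sample_index += 1
        if self.sample_index > self.sample_count:
            raise RuntimeError("exceeded configured number of pixel samples")

    def next_1d(self):
        return self.rng.random()

    def next_2d(self):
        return (self.rng.random(), self.rng.random())

# typical per-pixel loop from step 2/3 of the scalar workflow
s = TinySampler(sample_count=4)
s.seed(seed_offset=42)
for _ in range(4):
    u = s.next_2d()   # e.g. a 2D sample for the film plane
    s.advance()
```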
Advance to the next sample.
A subsequent call to next_1d or next_2d will access the first 1D or 2D components of this sample.
no description available
Create a clone of this sampler
The clone is allowed to be different to some extent, e.g. a pseudorandom generator should be based on a different random seed compared to the original. All other parameters are copied exactly.
May throw an exception if not supported. Cloning may also change the state of the original sampler (e.g. by using the next 1D sample as a seed for the clone).
mitsuba.render.Sampler
:no description available
Retrieve the next component value from the current sample
active
(bool):Mask to specify active lanes.
no description available
Retrieve the next two component values from the current sample
active
(bool):Mask to specify active lanes.
no description available
Return the number of samples per pixel
no description available
Deterministically seed the underlying RNG, if applicable.
In the context of wavefront ray tracing & dynamic arrays, this function must be called with wavefront_size matching the size of the wavefront.
seed_offset
(int):no description available
wavefront_size
(int):no description available
no description available
Set the number of samples per pass in wavefront modes (default is 1)
samples_per_wavefront
(int):no description available
no description available
Return the size of the wavefront (or 0, if not seeded)
no description available
Base class: mitsuba.core.Object
Return a bounding box surrounding the scene
mitsuba.core.BoundingBox3f
:no description available
Return the list of emitters
List[mitsuba.render.Emitter]:no description available
Return the environment emitter (if any)
mitsuba.render.Emitter
:no description available
Return the scene’s integrator
no description available
ref
(mitsuba.render.Interaction
):no description available
active
(bool):Mask to specify active lanes.
no description available
Intersect a ray against all primitives stored in the scene and return information about the resulting surface interaction
ray
(mitsuba.core.Ray3f
):A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which matters when the shapes are in motion)
mitsuba.render.SurfaceInteraction
:A detailed surface interaction record. Query its is_valid()
method to determine whether an intersection was actually found.
active
(bool):Mask to specify active lanes.
Intersect a ray against all primitives stored in the scene and return information about the resulting surface interaction
ray
(mitsuba.core.Ray3f
):A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which matters when the shapes are in motion)
mitsuba.render.SurfaceInteraction
:A detailed surface interaction record. Query its is_valid()
method to determine whether an intersection was actually found.
flags
(mitsuba.render.HitComputeFlags
):no description available
active
(bool):Mask to specify active lanes.
ray
(mitsuba.core.Ray3f
):no description available
active
(bool):Mask to specify active lanes.
mitsuba.render.SurfaceInteraction
:no description available
ray
(mitsuba.core.Ray3f
):no description available
active
(bool):Mask to specify active lanes.
mitsuba.render.PreliminaryIntersection
:no description available
ray
(mitsuba.core.Ray3f
):no description available
active
(bool):Mask to specify active lanes.
no description available
ref
(mitsuba.render.Interaction
):no description available
sample
(enoki.scalar.Vector2f):no description available
test_visibility
(bool):no description available
mask
(bool):no description available
Tuple[mitsuba.render.DirectionSample, enoki.scalar.Vector3f]:no description available
Return the list of sensors
List[mitsuba.render.Sensor]:no description available
no description available
Return whether any of the shape’s parameters require gradient
no description available
Base class: mitsuba.core.Object
Create an empty kd-tree and take build-related parameters from props.
Register a new shape with the kd-tree (to be called before build())
arg0
(mitsuba.render.Shape
):no description available
no description available
mitsuba.core.BoundingBox3f
:no description available
Return the number of registered primitives
no description available
Return the i-th shape (const version)
arg0
(int):no description available
mitsuba.render.Shape
:no description available
Return the number of registered shapes
no description available
Generic sampling record for positions
This sampling record is used to implement techniques that draw a position from a point, line, surface, or volume domain in 3D and furthermore provide auxiliary information about the sample.
Apart from returning the position and (optionally) the surface normal, the responsible sampling method must annotate the record with the associated probability density and delta.
Construct an uninitialized position sample
Copy constructor
other
(mitsuba.render.PositionSample3f
):no description available
Create a position sampling record from a surface intersection
This is useful to determine the hypothetical sampling density on a surface after hitting it using standard ray tracing. This happens for instance in path tracing with multiple importance sampling.
si
(mitsuba.render.SurfaceInteraction3f
):no description available
Set if the sample was drawn from a degenerate (Dirac delta) distribution
Note: we use an array of booleans instead of a mask, so that slicing a dynamic array of PositionSample remains possible even on architectures where scalar_t<Mask> != bool (e.g. Knights Landing).
Sampled surface normal (if applicable)
Optional: pointer to an associated object
In some uses of this record, sampling a position also involves choosing one of several objects (shapes, emitters, ..) on which the position lies. In that case, the object attribute stores a pointer to this object.
Sampled position
Probability density at the sample
Associated time value
Optional: 2D sample position associated with the record
In some uses of this record, a sampled position may be associated with an important 2D quantity, such as the texture coordinates on a triangle mesh or a position on the aperture of a sensor. When applicable, such positions are stored in the uv attribute.
size
(int):no description available
mitsuba.render.PositionSample3f
:no description available
Base class: mitsuba.render.PositionSample3f
Record for solid-angle based area sampling techniques
This data structure is used in techniques that sample positions relative to a fixed reference position in the scene. For instance, direct illumination strategies importance sample the incident radiance received by a given surface location. Mitsuba uses this approach in a wider bidirectional sense: sampling the incident importance due to a sensor also uses the same data structures and strategies, which are referred to as direct sampling.
This record inherits all fields from PositionSample and extends it with two useful quantities that are cached so that they don’t need to be recomputed: the unit direction and distance from the reference position to the sampled point.
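The two cached quantities — the unit direction and the distance from the reference position to the sampled point — can be computed as follows. This is a plain-Python sketch of the geometry, with a hypothetical helper name; the record itself stores them in the d and dist fields.

```python
import math

def direction_and_distance(ref_p, sample_p):
    """Unit direction and distance from a reference point to a sampled point.
    (Hypothetical helper; assumes the two points are distinct.)"""
    d = [sample_p[i] - ref_p[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in d))
    return [c / dist for c in d], dist
```

For a reference point at the origin and a sample at (0, 3, 4), this yields d = (0, 0.6, 0.8) and dist = 5.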
Construct an uninitialized direct sample
Construct from a position sample
other
(mitsuba.render.PositionSample3f
):no description available
Copy constructor
other
(mitsuba.render.DirectionSample3f
):no description available
Element-by-element constructor
p
(enoki.scalar.Vector3f):no description available
n
(enoki.scalar.Vector3f):no description available
uv
(enoki.scalar.Vector2f):no description available
time
(float):no description available
pdf
(float):no description available
delta
(bool):no description available
object
(mitsuba.core.Object
):no description available
d
(enoki.scalar.Vector3f):no description available
dist
(float):no description available
Create a position sampling record from a surface intersection
This is useful to determine the hypothetical sampling density on a surface after hitting it using standard ray tracing. This happens for instance in path tracing with multiple importance sampling.
si
(mitsuba.render.SurfaceInteraction3f
):no description available
ref
(mitsuba.render.Interaction3f
):no description available
Unit direction from the reference point to the target shape
Distance from the reference point to the target shape
Setup this record so that it can be used to query the density of a surface position (where the reference point lies on a surface).
ray
(mitsuba.core.Ray3f):Reference to the ray that generated the intersection si. The ray origin must be located at the reference surface and point towards si.p.
si
(mitsuba.render.SurfaceInteraction3f
):A surface intersection record (usually on an emitter).
note Defined in scene.h
no description available
size
(int):no description available
mitsuba.render.DirectionSample3f
:no description available
Base class: mitsuba.render.Interaction3f
Stores information related to a medium scattering interaction
Pointer to the associated medium
Shading frame
Convert a world-space vector into local shading coordinates
v
(enoki.scalar.Vector3f):no description available
no description available
Convert a local shading-space vector into world space
v
(enoki.scalar.Vector3f):no description available
no description available
Incident direction in the local shading frame
size
(int):no description available
mitsuba.render.MediumInteraction3f
:no description available
Base class: mitsuba.render.Interaction3f
Stores information related to a surface scattering interaction
Construct from a position sample. Unavailable fields such as wi and the partial derivatives are left uninitialized. The shape pointer is left uninitialized because we can’t guarantee that the given PositionSample::object points to a Shape instance.
ps
(mitsuba.render.PositionSample
):no description available
wavelengths
(enoki.scalar.Vector0f):no description available
Returns the BSDF of the intersected shape.
The parameter ray must match the one used to create the interaction record. This function computes texture coordinate partials if this is required by the BSDF (e.g. for texture filtering). Implementation in ‘bsdf.h’
ray
(mitsuba.core.RayDifferential3f
):no description available
mitsuba.render.BSDF
:no description available
mitsuba.render.BSDF
:no description available
Computes texture coordinate partials
ray
(mitsuba.core.RayDifferential3f
):no description available
no description available
Normal partials wrt. the UV parameterization
Normal partials wrt. the UV parameterization
Position partials wrt. the UV parameterization
Position partials wrt. the UV parameterization
UV partials wrt. changes in screen-space
UV partials wrt. changes in screen-space
Return the emitter associated with the intersection (if any) note Defined in scene.h
scene
(mitsuba.render.Scene
):no description available
active
(bool):Mask to specify active lanes.
mitsuba.render.Emitter
:no description available
no description available
no description available
Stores a pointer to the parent instance (if applicable)
Does the surface mark a transition between two media?
no description available
Is the intersected shape also a sensor?
no description available
Geometric normal
Primitive index, e.g. the triangle ID (if applicable)
Shading frame
Pointer to the associated shape
Determine the target medium
When is_medium_transition() = True, determine the medium that contains the ray(this->p, d)
d
(enoki.scalar.Vector3f):no description available
mitsuba.render.Medium
:no description available
Determine the target medium based on the cosine of the angle between the geometric normal and a direction
Returns the exterior medium when cos_theta > 0 and the interior medium when cos_theta <= 0.
cos_theta
(float):no description available
mitsuba.render.Medium
:no description available
Convert a world-space vector into local shading coordinates
v
(enoki.scalar.Vector3f):no description available
no description available
Converts a Mueller matrix defined in world space to a local frame
A Mueller matrix operates from the (implicitly) defined frame stokes_basis(in_forward) to the frame stokes_basis(out_forward). This method converts a Mueller matrix defined on directions in world-space to a Mueller matrix defined in the local frame.
This expands to a no-op in non-polarized modes.
M_world
(enoki.scalar.Vector3f):The Mueller matrix in world space.
wi_world
(enoki.scalar.Vector3f):Incident direction (along propagation direction of light), given in world-space coordinates.
wo_world
(enoki.scalar.Vector3f):Outgoing direction (along propagation direction of light), given in world-space coordinates.
Equivalent Mueller matrix that operates in local frame coordinates.
Convert a local shading-space vector into world space
v
(enoki.scalar.Vector3f):no description available
no description available
Converts a Mueller matrix defined in a local frame to world space
A Mueller matrix operates from the (implicitly) defined frame stokes_basis(in_forward) to the frame stokes_basis(out_forward). This method converts a Mueller matrix defined on directions in the local frame to a Mueller matrix defined on world-space directions.
This expands to a no-op in non-polarized modes.
M_local
(enoki.scalar.Vector3f):The Mueller matrix in local space, e.g. returned by a BSDF.
wi_local
(enoki.scalar.Vector3f):Incident direction (along propagation direction of light), given in local frame coordinates.
wo_local
(enoki.scalar.Vector3f):Outgoing direction (along propagation direction of light), given in local frame coordinates.
Equivalent Mueller matrix that operates in world-space coordinates.
UV surface coordinates
Incident direction in the local shading frame
size
(int):no description available
mitsuba.render.SurfaceInteraction3f
:no description available
Constructs the Mueller matrix of an ideal absorber
value
(float):The amount of absorption.
no description available
Constructs the Mueller matrix of an ideal absorber
value
(enoki.scalar.Vector3f):The amount of absorption.
mitsuba.render.Color
:no description available
Constructs the Mueller matrix of an ideal depolarizer
value
(float):The value of the (0, 0) element
no description available
Constructs the Mueller matrix of an ideal depolarizer
value
(enoki.scalar.Vector3f):The value of the (0, 0) element
mitsuba.render.Color
:no description available
Constructs the Mueller matrix of a linear diattenuator, which attenuates the electric field components at 0 and 90 degrees by ‘x’ and ‘y’, respectively.
x
(float):no description available
y
(float):no description available
no description available
Constructs the Mueller matrix of a linear diattenuator, which attenuates the electric field components at 0 and 90 degrees by ‘x’ and ‘y’, respectively.
x
(enoki.scalar.Vector3f):no description available
y
(enoki.scalar.Vector3f):no description available
mitsuba.render.Color
:no description available
Constructs the Mueller matrix of a linear polarizer which transmits linear polarization at 0 degrees.
“Polarized Light” by Edward Collett, Ch. 5 eq. (13)
value
(float):The amount of attenuation of the transmitted component (1 corresponds to an ideal polarizer).
no description available
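The cited matrix can be written out directly: a horizontal linear polarizer scales the matrix (1/2)·[[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]] by the transmission value. A plain-Python sketch, with a hypothetical apply helper for matrix-vector products:

```python
def linear_polarizer(value=1.0):
    """Mueller matrix of a horizontal linear polarizer (Collett, Ch. 5 eq. 13)."""
    a = 0.5 * value
    return [[a, a, 0, 0],
            [a, a, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]]

def apply(M, s):
    """Apply a 4x4 Mueller matrix to a Stokes vector."""
    return [sum(M[i][j] * s[j] for j in range(4)) for i in range(4)]

# unpolarized light [1,0,0,0] loses half its intensity and
# becomes fully horizontally polarized
out = apply(linear_polarizer(1.0), [1, 0, 0, 0])   # -> [0.5, 0.5, 0, 0]
```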
Constructs the Mueller matrix of a linear polarizer which transmits linear polarization at 0 degrees.
“Polarized Light” by Edward Collett, Ch. 5 eq. (13)
value
(enoki.scalar.Vector3f):The amount of attenuation of the transmitted component (1 corresponds to an ideal polarizer).
mitsuba.render.Color
:no description available
Constructs the Mueller matrix of a linear retarder which has its fast axis aligned vertically.
This implements the general case with arbitrary phase shift and can be used to construct the common special cases of quarter-wave and half-wave plates.
“Polarized Light” by Edward Collett, Ch. 5 eq. (27)
phase
(float):The phase difference between the fast and slow axis
no description available
Constructs the Mueller matrix of a linear retarder which has its fast axis aligned vertically.
This implements the general case with arbitrary phase shift and can be used to construct the common special cases of quarter-wave and half-wave plates.
“Polarized Light” by Edward Collett, Ch. 5 eq. (27)
phase
(enoki.scalar.Vector3f):The phase difference between the fast and slow axis
mitsuba.render.Color
:no description available
Reverse direction of propagation of the electric field. Also used for reflecting reference frames.
M
(enoki.scalar.Matrix4f):no description available
no description available
Reverse direction of propagation of the electric field. Also used for reflecting reference frames.
M
(enoki::Matrix<mitsuba.render.Color
):no description available
mitsuba.render.Color
:no description available
Return the Mueller matrix for some new reference frames. This version rotates the input/output frames independently.
This operation is often used in polarized light transport when we have a known Mueller matrix ‘M’ that operates from ‘in_basis_current’ to ‘out_basis_current’ but instead want to re-express it as a Mueller matrix that operates from ‘in_basis_target’ to ‘out_basis_target’.
M
(enoki.scalar.Matrix4f):The current Mueller matrix that operates from in_basis_current to out_basis_current.
in_forward
(enoki.scalar.Vector3f):Direction of travel for input Stokes vector (normalized)
in_basis_current
(enoki.scalar.Vector3f):Current (normalized) input Stokes basis. Must be orthogonal to in_forward.
in_basis_target
(enoki.scalar.Vector3f):Target (normalized) input Stokes basis. Must be orthogonal to in_forward.
out_forward
(enoki.scalar.Vector3f):Direction of travel for output Stokes vector (normalized)
out_basis_current
(enoki.scalar.Vector3f):Current (normalized) output Stokes basis. Must be orthogonal to out_forward.
out_basis_target
(enoki.scalar.Vector3f):Target (normalized) output Stokes basis. Must be orthogonal to out_forward.
New Mueller matrix that operates from in_basis_target to out_basis_target.
Return the Mueller matrix for some new reference frames. This version rotates the input/output frames independently.
This operation is often used in polarized light transport when we have a known Mueller matrix ‘M’ that operates from ‘in_basis_current’ to ‘out_basis_current’ but instead want to re-express it as a Mueller matrix that operates from ‘in_basis_target’ to ‘out_basis_target’.
M
(enoki::Matrix<mitsuba.render.Color):The current Mueller matrix that operates from in_basis_current to out_basis_current.
in_forward
(enoki.scalar.Vector3f):Direction of travel for input Stokes vector (normalized)
in_basis_current
(enoki.scalar.Vector3f):Current (normalized) input Stokes basis. Must be orthogonal to in_forward.
in_basis_target
(enoki.scalar.Vector3f):Target (normalized) input Stokes basis. Must be orthogonal to in_forward.
out_forward
(enoki.scalar.Vector3f):Direction of travel for output Stokes vector (normalized)
out_basis_current
(enoki.scalar.Vector3f):Current (normalized) output Stokes basis. Must be orthogonal to out_forward.
out_basis_target
(enoki.scalar.Vector3f):Target (normalized) output Stokes basis. Must be orthogonal to out_forward.
mitsuba.render.Color
:New Mueller matrix that operates from in_basis_target to out_basis_target.
Return the Mueller matrix for some new reference frames. This version applies the same rotation to the input/output frames.
This operation is often used in polarized light transport when we have a known Mueller matrix ‘M’ that operates from ‘basis_current’ to ‘basis_current’ but instead want to re-express it as a Mueller matrix that operates from ‘basis_target’ to ‘basis_target’.
M
(enoki.scalar.Matrix4f):The current Mueller matrix that operates from basis_current to basis_current.
forward
(enoki.scalar.Vector3f):Direction of travel for input Stokes vector (normalized)
basis_current
(enoki.scalar.Vector3f):Current (normalized) input Stokes basis. Must be orthogonal to forward.
basis_target
(enoki.scalar.Vector3f):Target (normalized) input Stokes basis. Must be orthogonal to forward.
New Mueller matrix that operates from basis_target to basis_target.
Return the Mueller matrix for some new reference frames. This version applies the same rotation to the input/output frames.
This operation is often used in polarized light transport when we have a known Mueller matrix ‘M’ that operates from ‘basis_current’ to ‘basis_current’ but instead want to re-express it as a Mueller matrix that operates from ‘basis_target’ to ‘basis_target’.
M
(enoki::Matrix<mitsuba.render.Color):The current Mueller matrix that operates from basis_current to basis_current.
forward
(enoki.scalar.Vector3f):Direction of travel for input Stokes vector (normalized)
basis_current
(enoki.scalar.Vector3f):Current (normalized) input Stokes basis. Must be orthogonal to forward.
basis_target
(enoki.scalar.Vector3f):Target (normalized) input Stokes basis. Must be orthogonal to forward.
mitsuba.render.Color
:New Mueller matrix that operates from basis_target to basis_target.
Gives the Mueller matrix that aligns the reference frames (defined by their respective basis vectors) of two collinear Stokes vectors.
If we have a Stokes vector s_current expressed in ‘basis_current’, we can re-interpret it as a Stokes vector rotate_stokes_basis(..) * s1 that is expressed in ‘basis_target’ instead. For example: Horizontally polarized light [1,1,0,0] in a basis [1,0,0] can be interpreted as +45˚ linear polarized light [1,0,1,0] by switching to a target basis [0.707, -0.707, 0].
forward
(enoki.scalar.Vector3f):Direction of travel for Stokes vector (normalized)
basis_current
(enoki.scalar.Vector3f):Current (normalized) Stokes basis. Must be orthogonal to forward.
basis_target
(enoki.scalar.Vector3f):Target (normalized) Stokes basis. Must be orthogonal to forward.
wi
(enoki.scalar.Vector3f):no description available
Mueller matrix that performs the desired change of reference frames.
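The basis change can be realized by measuring the signed angle between the two basis vectors around the propagation direction and applying the corresponding rotator matrix. A plain-Python sketch reproducing the worked example from the text (the sign convention here matches that example; Mitsuba's implementation is assumed to be equivalent up to conventions):

```python
import math

def rotator(theta):
    # Mueller rotation matrix: rotates the Stokes reference frame by theta radians
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]]

def rotate_stokes_basis(forward, basis_current, basis_target):
    # signed angle from basis_current to basis_target around `forward`
    cx = basis_current[1] * basis_target[2] - basis_current[2] * basis_target[1]
    cy = basis_current[2] * basis_target[0] - basis_current[0] * basis_target[2]
    cz = basis_current[0] * basis_target[1] - basis_current[1] * basis_target[0]
    sin_t = cx * forward[0] + cy * forward[1] + cz * forward[2]
    cos_t = sum(basis_current[i] * basis_target[i] for i in range(3))
    return rotator(math.atan2(sin_t, cos_t))

def apply(M, s):
    return [sum(M[i][j] * s[j] for j in range(4)) for i in range(4)]

# horizontal polarization in basis [1,0,0] reads as +45 deg polarization
# in the basis rotated by -45 deg around the propagation direction
r = 1 / math.sqrt(2)
M = rotate_stokes_basis([0, 0, 1], [1, 0, 0], [r, -r, 0])
out = apply(M, [1, 1, 0, 0])   # -> approximately [1, 0, 1, 0]
```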
Gives the Mueller matrix that aligns the reference frames (defined by their respective basis vectors) of two collinear Stokes vectors.
If we have a Stokes vector s_current expressed in ‘basis_current’, we can re-interpret it as a Stokes vector rotate_stokes_basis(..) * s1 that is expressed in ‘basis_target’ instead. For example: Horizontally polarized light [1,1,0,0] in a basis [1,0,0] can be interpreted as +45˚ linear polarized light [1,0,1,0] by switching to a target basis [0.707, -0.707, 0].
forward
(enoki.scalar.Vector3f):Direction of travel for Stokes vector (normalized)
basis_current
(enoki.scalar.Vector3f):Current (normalized) Stokes basis. Must be orthogonal to forward.
basis_target
(enoki.scalar.Vector3f):Target (normalized) Stokes basis. Must be orthogonal to forward.
wi
(enoki.scalar.Vector3f):no description available
mitsuba.render.Color
:Mueller matrix that performs the desired change of reference frames.
Applies a counter-clockwise rotation to the Mueller matrix of a given element.
theta
(float):no description available
M
(enoki.scalar.Matrix4f):no description available
no description available
Applies a counter-clockwise rotation to the Mueller matrix of a given element.
theta
(enoki.scalar.Vector3f):no description available
M
(enoki::Matrix<mitsuba.render.Color
):no description available
mitsuba.render.Color
:no description available
Constructs the Mueller matrix of an ideal rotator, which performs a counter-clockwise rotation of the electric field by ‘theta’ radians (when facing the light beam from the sensor side).
To be more precise, it rotates the reference frame of the current Stokes vector. For example: horizontally linear polarized light s1 = [1,1,0,0] will look like -45˚ linear polarized light s2 = R(45˚) * s1 = [1,0,-1,0] after applying a rotator of +45˚ to it.
“Polarized Light” by Edward Collett, Ch. 5 eq. (43)
theta
(float):no description available
no description available
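The cited rotator matrix (Collett, eq. 43) is [[1,0,0,0],[0,cos 2θ,sin 2θ,0],[0,−sin 2θ,cos 2θ,0],[0,0,0,1]]. A plain-Python sketch reproducing the example from the text, with a hypothetical apply helper:

```python
import math

def rotator(theta):
    """Ideal rotator; rotates the Stokes reference frame by theta radians."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[1, 0, 0, 0],
            [0, c, s, 0],
            [0, -s, c, 0],
            [0, 0, 0, 1]]

def apply(M, v):
    """Apply a 4x4 Mueller matrix to a Stokes vector."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# the example from the text: horizontally polarized light looks like
# -45 deg polarized light after applying a +45 deg rotator
out = apply(rotator(math.radians(45)), [1, 1, 0, 0])   # -> approximately [1, 0, -1, 0]
```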
Constructs the Mueller matrix of an ideal rotator, which performs a counter-clockwise rotation of the electric field by ‘theta’ radians (when facing the light beam from the sensor side).
To be more precise, it rotates the reference frame of the current Stokes vector. For example: horizontally linear polarized light s1 = [1,1,0,0] will look like -45˚ linear polarized light s2 = R(45˚) * s1 = [1,0,-1,0] after applying a rotator of +45˚ to it.
“Polarized Light” by Edward Collett, Ch. 5 eq. (43)
theta
(enoki.scalar.Vector3f):no description available
mitsuba.render.Color
:no description available
Calculates the Mueller matrix of a specular reflection at an interface between two dielectrics or conductors.
cos_theta_i
(float):Cosine of the angle between the surface normal and the incident ray
eta
(enoki.scalar.Complex2f):Complex-valued relative refractive index of the interface. In the real case, a value greater than 1.0 means that the surface normal points into the region of lower density.
no description available
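For a real-valued (dielectric) index the matrix follows from the Fresnel amplitude coefficients r_s and r_p. A plain-Python sketch of that special case under a common sign convention — the conductor case additionally requires complex arithmetic, and the total-internal-reflection branch here ignores the relative phase shift:

```python
import math

def specular_reflection_dielectric(cos_theta_i, eta):
    """Mueller matrix of specular reflection off a dielectric (real eta).
    A sketch; assumes a common Fresnel sign convention."""
    sin2_theta_t = (1 - cos_theta_i ** 2) / (eta * eta)
    if sin2_theta_t > 1:          # total internal reflection (phase shift ignored)
        rs = rp = 1.0
    else:
        cos_theta_t = math.sqrt(1 - sin2_theta_t)
        rs = (cos_theta_i - eta * cos_theta_t) / (cos_theta_i + eta * cos_theta_t)
        rp = (eta * cos_theta_i - cos_theta_t) / (eta * cos_theta_i + cos_theta_t)
    a = 0.5 * (rs * rs + rp * rp)   # average reflectance
    b = 0.5 * (rs * rs - rp * rp)   # diattenuation term
    c = rs * rp                     # cross term (real case)
    return [[a, b, 0, 0],
            [b, a, 0, 0],
            [0, 0, c, 0],
            [0, 0, 0, c]]
```

At normal incidence on glass (eta = 1.5), the (0, 0) entry reproduces the familiar 4% reflectance.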
Calculates the Mueller matrix of a specular reflection at an interface between two dielectrics or conductors.
cos_theta_i
(enoki.scalar.Vector3f):Cosine of the angle between the surface normal and the incident ray
eta
(enoki::Complex<mitsuba.render.Color
):Complex-valued relative refractive index of the interface. In the real case, a value greater than 1.0 means that the surface normal points into the region of lower density.
mitsuba.render.Color
:no description available
Calculates the Mueller matrix of a specular transmission at an interface between two dielectrics or conductors.
cos_theta_i
(float):Cosine of the angle between the surface normal and the incident ray
eta
(float):Complex-valued relative refractive index of the interface. A value greater than 1.0 in the real case means that the surface normal is pointing into the region of lower density.
no description available
Calculates the Mueller matrix of a specular transmission at an interface between two dielectrics or conductors.
cos_theta_i
(enoki.scalar.Vector3f):Cosine of the angle between the surface normal and the incident ray
eta
(enoki.scalar.Vector3f):Complex-valued relative refractive index of the interface. A value greater than 1.0 in the real case means that the surface normal is pointing into the region of lower density.
mitsuba.render.Color
:no description available
Gives the reference frame basis for a Stokes vector.
For light transport involving polarized quantities it is essential to keep track of reference frames. A Stokes vector is only meaningful if we also know w.r.t. which basis this state of light is observed. In Mitsuba, these reference frames are never explicitly stored but instead can be computed on the fly using this function.
w
(enoki.scalar.Vector3f):Direction of travel for Stokes vector (normalized)
The (implicitly defined) reference coordinate system basis for the Stokes vector travelling along w.
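Such a basis can be built from the propagation direction alone with a branchless orthonormal-frame construction, e.g. the one by Duff et al. (2017). This is a sketch of what a coordinate_system-style helper might do; Mitsuba's exact basis convention may differ.

```python
import math

def coordinate_system(w):
    """Build an orthonormal basis (v1, v2) around unit vector w (Duff et al. 2017)."""
    s = math.copysign(1.0, w[2])
    a = -1.0 / (s + w[2])
    b = w[0] * w[1] * a
    v1 = [1.0 + s * w[0] * w[0] * a, s * b, -s * w[0]]
    v2 = [b, s + w[1] * w[1] * a, -w[1]]
    return v1, v2   # both unit length, both orthogonal to w and to each other
```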
a
(enoki.scalar.Vector3f):no description available
b
(enoki.scalar.Vector3f):no description available
no description available
Members:
No flags set
Compute position and geometric normal
Compute UV coordinates
Compute position partials wrt. UV coordinates
Compute the geometric normal partials wrt. the UV coordinates
Compute the shading normal partials wrt. the UV coordinates
Compute shading normal and shading frame
Force computed fields not to be differentiable
Compute all fields of the surface interaction data structure (default)
Compute all fields of the surface interaction data structure in a non differentiable way
arg0
(int):no description available
Base class: mitsuba.core.Object
Storage for an image sub-block (a.k.a. render bucket)
This class is used by image-based parallel processes and encapsulates computed rectangular regions of an image. This allows for easy and efficient distributed rendering of large images. Image blocks usually also include a border region storing contributions that are slightly outside of the block, which is required to support image reconstruction filters.
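The interplay of block offset, border region, and reconstruction filter can be sketched in plain Python. The function name and buffer layout below are illustrative, not ImageBlock's actual interface; a radius-1 tent filter stands in for the general reconstruction filter:

```python
import math

def put_sample(buffer, block_offset, border, pos, value):
    # Splat one sample into a row-major pixel buffer that carries a
    # `border`-pixel margin on every side, weighting nearby pixels with
    # a radius-1 tent filter centered at the sample position.
    px = pos[0] - block_offset[0] + border   # position in buffer coordinates
    py = pos[1] - block_offset[1] + border
    h, w = len(buffer), len(buffer[0])
    for y in range(math.floor(py) - 1, math.floor(py) + 2):
        for x in range(math.floor(px) - 1, math.floor(px) + 2):
            # Filter weight of pixel (x, y), whose center is (x+0.5, y+0.5)
            wx = max(0.0, 1.0 - abs(x + 0.5 - px))
            wy = max(0.0, 1.0 - abs(y + 0.5 - py))
            if 0 <= x < w and 0 <= y < h and wx * wy > 0.0:
                buffer[y][x] += wx * wy * value
```

The border exists precisely so that samples landing near the block edge can still deposit their full filter footprint.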
size
(enoki.scalar.Vector2i):no description available
channel_count
(int):no description available
filter
(mitsuba.core.ReconstructionFilter
):no description available
warn_negative
(bool):no description available
warn_invalid
(bool):no description available
border
(bool):no description available
normalize
(bool):no description available
Return the border region used by the reconstruction filter
no description available
Return the number of channels stored by the image block
no description available
Clear everything to zero.
no description available
Return the underlying pixel buffer
no description available
Return the bitmap’s height in pixels
no description available
Return the current block offset
no description available
Accumulate another image block into this one
block
(mitsuba.render.ImageBlock
):no description available
Store a single sample / packets of samples inside the image block.
Note: This method is only valid if a reconstruction filter was provided when the block was constructed.
pos
(enoki.scalar.Vector2f):Denotes the sample position in fractional pixel coordinates. It is not checked, and so must be valid. The block’s offset is subtracted from the given position to obtain the final pixel position within the block.
wavelengths
(enoki.scalar.Vector0f):Sample wavelengths in nanometers
value
(enoki.scalar.Vector3f):Sample value associated with the specified wavelengths
alpha
(float):Alpha value associated with the sample
False if one of the sample values was invalid, e.g. NaN or negative. A warning is also printed if m_warn_negative or m_warn_invalid is enabled.
active
(bool):Mask to specify active lanes.
pos
(enoki.scalar.Vector2f):no description available
data
(List[float]):no description available
active
(bool):Mask to specify active lanes.
Set the current block offset.
This corresponds to the offset from the top-left corner of a larger image (e.g. a Film) to the top-left corner of this ImageBlock instance.
offset
(enoki.scalar.Vector2i):no description available
no description available
Warn when writing invalid (NaN, +/- infinity) sample values?
value
(bool):no description available
no description available
Warn when writing negative sample values?
value
(bool):no description available
no description available
Return the current block size
no description available
Warn when writing invalid (NaN, +/- infinity) sample values?
no description available
Warn when writing negative sample values?
no description available
Return the bitmap’s width in pixels
no description available
Base class: mitsuba.core.Object
Abstract integrator base class, which does not make any assumptions with regards to how radiance is computed.
In Mitsuba, the different rendering techniques are collectively referred to as integrators, since they perform integration over a high-dimensional space. Each integrator represents a specific approach for solving the light transport equation—usually favored in certain scenarios, but at the same time affected by its own set of intrinsic limitations. Therefore, it is important to carefully select an integrator based on user-specified accuracy requirements and properties of the scene to be rendered.
This is the base class of all integrators; it does not make any assumptions on how radiance is computed, which allows for many different kinds of implementations.
Cancel a running render job
This function can be called asynchronously to cancel a running render job. In this case, render() will quit with a return value of False.
no description available
Perform the main rendering job. Returns True upon success.
scene
(mitsuba.render.Scene
):no description available
sensor
(mitsuba.render.Sensor
):no description available
no description available
Generic surface interaction data structure
Is the current interaction valid?
no description available
Position of the interaction in world coordinates
Spawn a semi-infinite ray towards the given direction
d
(enoki.scalar.Vector3f):no description available
mitsuba.core.Ray3f
:no description available
Spawn a finite ray towards the given position
t
(enoki.scalar.Vector3f):no description available
mitsuba.core.Ray3f
:no description available
Distance traveled along the ray
Time value associated with the interaction
Wavelengths associated with the ray that produced this interaction
size
(int):no description available
mitsuba.render.Interaction3f
:no description available
Base class: mitsuba.render.SamplingIntegrator
Stores preliminary information related to a ray intersection
This data structure is used as the return type of the Shape::ray_intersect_preliminary efficient ray intersection routine. It stores whether the shape is intersected by a given ray, and caches preliminary information about the intersection if that is the case.
If the intersection is deemed relevant, detailed intersection information can later be obtained via the create_surface_interaction() method.
Compute and return detailed information related to a surface interaction
ray
(mitsuba.core.Ray3f
):Ray associated with the ray intersection
flags
(mitsuba.render.HitComputeFlags
):Flags specifying which information should be computed
active
(bool):Mask to specify active lanes.
mitsuba.render.SurfaceInteraction3f
:A data structure containing the detailed information
Stores a pointer to the parent instance (if applicable)
Is the current interaction valid?
no description available
Primitive index, e.g. the triangle ID (if applicable)
2D coordinates on the primitive surface parameterization
Pointer to the associated shape
Shape index, e.g. the shape ID in shapegroup (if applicable)
Distance traveled along the ray
Base class: mitsuba.render.Integrator
Integrator based on Monte Carlo sampling
This integrator performs Monte Carlo integration to return an unbiased statistical estimate of the radiance value along a given ray. The default implementation of the render() method then repeatedly invokes this estimator to compute all pixels of the image.
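The underlying principle can be illustrated with a tiny self-contained estimator for a 1D integral. This is generic Python, not Mitsuba's sample() interface:

```python
import random

def mc_estimate(f, n, rng=random.random):
    # Unbiased Monte Carlo estimate of the integral of f over [0, 1]:
    # average n evaluations of f at uniformly distributed sample points.
    return sum(f(rng()) for _ in range(n)) / n

# Example: estimate the integral of x^2 over [0, 1], whose exact value is 1/3
approx = mc_estimate(lambda x: x * x, 100_000)
```

A rendering integrator applies the same idea in a much higher-dimensional space: each invocation of sample() produces one unbiased radiance estimate, and the renderer averages many of them per pixel.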
arg0
(mitsuba.core.Properties
):no description available
For integrators that return one or more arbitrary output variables (AOVs), this function specifies a list of associated channel names. The default implementation simply returns an empty vector.
no description available
Sample the incident radiance along a ray.
scene
(mitsuba.render.Scene
):The underlying scene in which the radiance function should be sampled
sampler
(mitsuba.render.Sampler
):A source of (pseudo-/quasi-) random numbers
ray
(mitsuba.core.RayDifferential3f
):A ray, optionally with differentials
medium
(mitsuba.render.Medium
):If the ray is inside a medium, this parameter holds a pointer to that medium
active
(bool):A mask that indicates which SIMD lanes are active
aov
:Integrators may return one or more arbitrary output variables (AOVs) via this parameter. If nullptr is provided to this argument, no AOVs should be returned. Otherwise, the caller guarantees that space for at least aov_names().size() entries has been allocated.
A pair containing a spectrum and a mask specifying whether a surface or medium interaction was sampled. False mask entries indicate that the ray “escaped” the scene, in which case the returned spectrum contains the contribution of environment maps, if present. The mask can be used to estimate a suitable alpha channel of a rendered image.
In the Python bindings, this function returns the aov output argument as an additional return value. In other words: ``(spec, mask, aov) = integrator.sample(scene, sampler, ray, medium, active)``
Indicates whether cancel() or a timeout have occurred. Should be checked regularly in the integrator’s main loop so that timeouts are enforced accurately.
Note that accurate timeouts rely on m_render_timer, which needs to be reset at the beginning of the rendering phase.
no description available
Base class: mitsuba.core.Object
Generates a spiral of blocks to be rendered.
Adam Arbree Aug 25, 2005 RayTracer.java Used with permission. Copyright 2005 Program of Computer Graphics, Cornell University
Create a new spiral generator for the given size, offset into a larger frame, and block size
size
(enoki.scalar.Vector2i):no description available
offset
(enoki.scalar.Vector2i):no description available
block_size
(int):no description available
passes
(int):no description available
Return the total number of blocks
no description available
Return the maximum block size
no description available
Return the offset, size and unique identifier of the next block.
A size of zero indicates that the spiral traversal is done.
no description available
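The traversal can be sketched as a plain-Python generator that yields block indices in spiral order starting from the center. This is an illustrative reimplementation of the idea, not the Spiral class's actual code:

```python
def spiral_blocks(nx, ny):
    # Yield (bx, by) block indices in spiral order, starting from the
    # center block and skipping positions that fall outside the nx*ny grid.
    x, y = (nx - 1) // 2, (ny - 1) // 2   # start at the center block
    dx, dy = 1, 0                         # initial direction of travel
    step, taken, legs = 1, 0, 0
    emitted, total = 0, nx * ny
    while emitted < total:
        if 0 <= x < nx and 0 <= y < ny:
            yield (x, y)
            emitted += 1
        x, y = x + dx, y + dy
        taken += 1
        if taken == step:                 # finished one leg of the spiral
            taken = 0
            dx, dy = -dy, dx              # turn by 90 degrees
            legs += 1
            if legs == 2:                 # every two legs, lengthen the leg
                legs = 0
                step += 1
```

Rendering blocks in this order tends to show the (usually most interesting) center of the image first in a preview window.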
Reset the spiral to its initial state. Does not affect the number of passes.
no description available
Sets the number of times the spiral should automatically reset. Not affected by a call to reset.
arg0
(int):no description available
no description available
Base class: mitsuba.core.Object
Base class of all surface texture implementations
This class implements a generic texture map that supports evaluation at arbitrary surface positions and wavelengths (if compiled in spectral mode). It can be used to provide both intensities (e.g. for light sources) and unitless reflectance parameters (e.g. an albedo of a reflectance model).
The spectrum can be evaluated at arbitrary (continuous) wavelengths, though the underlying function is not required to be smooth or even continuous.
scale
(float):no description available
mitsuba.render.Texture
:no description available
Evaluate the texture at the given surface interaction
si
(mitsuba.render.SurfaceInteraction3f
):An interaction record describing the associated surface position
active
(bool):Mask to specify active lanes.
An unpolarized spectral power distribution or reflectance value
Monochromatic evaluation of the texture at the given surface interaction
This function differs from eval() in that it provides raw access to scalar intensity/reflectance values without any color processing (e.g. spectral upsampling). This is useful in parts of the renderer that encode scalar quantities using textures, e.g. a height field.
si
(mitsuba.render.SurfaceInteraction3f
):An interaction record describing the associated surface position
active
(bool):Mask to specify active lanes.
A scalar intensity or reflectance value
Trichromatic evaluation of the texture at the given surface interaction
This function differs from eval() in that it provides raw access to RGB intensity/reflectance values without any additional color processing (e.g. RGB-to-spectral upsampling). This is useful in parts of the renderer that encode 3D quantities using textures, e.g. a normal map.
si
(mitsuba.render.SurfaceInteraction3f
):An interaction record describing the associated surface position
active
(bool):Mask to specify active lanes.
A trichromatic intensity or reflectance value
Does this texture evaluation depend on the UV coordinates
no description available
Return the mean value of the spectrum over the support (MTS_WAVELENGTH_MIN..MTS_WAVELENGTH_MAX)
Not every implementation necessarily provides this function. The default implementation throws an exception.
Even if the operation is provided, it may only return an approximation.
no description available
Returns the probability per unit area of sample_position()
p
(enoki.scalar.Vector2f):no description available
active
(bool):Mask to specify active lanes.
no description available
Evaluate the density function of the sample_spectrum() method as a probability per unit wavelength (in units of 1/nm).
Not every implementation necessarily provides this function. The default implementation throws an exception.
si
(mitsuba.render.SurfaceInteraction3f
):An interaction record describing the associated surface position
active
(bool):Mask to specify active lanes.
A density value for each wavelength in si.wavelengths (hence the Wavelength type).
Importance sample a surface position proportional to the overall spectral reflectance or intensity of the texture
This function assumes that the texture is implemented as a mapping from 2D UV positions to texture values, which is not necessarily true for all textures (e.g. 3D noise functions, mesh attributes, etc.). For this reason, not every plugin will provide a specialized implementation, and the default implementation simply returns the input sample (i.e. uniform sampling is used).
sample
(enoki.scalar.Vector2f):A 2D vector of uniform variates
active
(bool):Mask to specify active lanes.
A texture-space position in the range \([0, 1]^2\)
The associated probability per unit area in UV space
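A hypothetical sketch of how a texture plugin might specialize this: importance sample a texel proportionally to its value by inverting the marginal CDF over rows and then the conditional CDF within the chosen row. The function name and return layout follow the description above but are assumptions, not Mitsuba's code:

```python
import bisect

def sample_position(texture, sample):
    # `texture` is a 2D grid of non-negative values; `sample` is a pair of
    # uniform variates. Returns a UV position in [0, 1]^2 together with
    # the sampling density per unit area in UV space.
    h, w = len(texture), len(texture[0])
    row_sums = [sum(row) for row in texture]
    total = sum(row_sums)
    # Marginal distribution over rows
    cdf_rows, acc = [], 0.0
    for s in row_sums:
        acc += s
        cdf_rows.append(acc / total)
    r = bisect.bisect_left(cdf_rows, sample[1])
    # Conditional distribution over columns within the chosen row
    cdf_cols, acc = [], 0.0
    for v in texture[r]:
        acc += v
        cdf_cols.append(acc / row_sums[r])
    c = bisect.bisect_left(cdf_cols, sample[0])
    pdf = texture[r][c] / total * (w * h)   # discrete prob. times texel count
    uv = ((c + 0.5) / w, (r + 0.5) / h)
    return uv, pdf
```

Bright texels are chosen more often and correspondingly carry a larger density value, so the estimator remains unbiased.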
Importance sample a set of wavelengths proportional to the spectrum defined at the given surface position
Not every implementation necessarily provides this function, and it is a no-op when compiling non-spectral variants of Mitsuba. The default implementation throws an exception.
si
(mitsuba.render.SurfaceInteraction3f
):An interaction record describing the associated surface position
sample
(enoki.scalar.Vector0f):A uniform variate for each desired wavelength.
active
(bool):Mask to specify active lanes.
1. Set of sampled wavelengths specified in nanometers
2. The Monte Carlo importance weight (Spectral power distribution value divided by the sampling density)
type
(mitsuba.render.MicrofacetType
):no description available
alpha_u
(float):no description available
alpha_v
(float):no description available
wi
(mitsuba.render.Vector
):no description available
eta
(float):no description available
no description available
Calculates the unpolarized Fresnel reflection coefficient at a planar interface between two dielectrics
cos_theta_i
(float):Cosine of the angle between the surface normal and the incident ray
eta
(float):Relative refractive index of the interface. A value greater than 1.0 means that the surface normal is pointing into the region of lower density.
A tuple (F, cos_theta_t, eta_it, eta_ti) consisting of
F Fresnel reflection coefficient.
cos_theta_t Cosine of the angle between the surface normal and the transmitted ray
eta_it Relative index of refraction in the direction of travel.
eta_ti Reciprocal of the relative index of refraction in the direction of travel. This also happens to be equal to the scale factor that must be applied to the X and Y component of the refracted direction.
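The computation of this tuple can be sketched in plain Python. This is a simplified stand-in that mirrors the (F, cos_theta_t, eta_it, eta_ti) convention described above, not Mitsuba's vectorized implementation:

```python
import math

def fresnel_dielectric(cos_theta_i, eta):
    # `eta` is the relative refractive index of the interface; a negative
    # cos_theta_i means the ray arrives from below the surface normal.
    outside = cos_theta_i >= 0.0
    eta_it = eta if outside else 1.0 / eta      # relative IOR along travel
    eta_ti = 1.0 / eta_it                       # ... and its reciprocal
    # Snell's law yields the squared sine of the transmitted angle
    sin2_theta_t = eta_ti * eta_ti * (1.0 - cos_theta_i * cos_theta_i)
    if sin2_theta_t >= 1.0:                     # total internal reflection
        return 1.0, 0.0, eta_it, eta_ti
    cos_theta_t = math.sqrt(1.0 - sin2_theta_t)
    ci = abs(cos_theta_i)
    r_s = (ci - eta_it * cos_theta_t) / (ci + eta_it * cos_theta_t)
    r_p = (eta_it * ci - cos_theta_t) / (eta_it * ci + cos_theta_t)
    F = 0.5 * (r_s * r_s + r_p * r_p)           # unpolarized average
    # The transmitted cosine lies in the hemisphere opposite the incident one
    return F, (-cos_theta_t if outside else cos_theta_t), eta_it, eta_ti
```

At normal incidence with eta = 1.5 this reproduces the familiar 4% reflectance of glass.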
Calculates the unpolarized Fresnel reflection coefficient at a planar interface of a conductor, i.e. a surface with a complex-valued relative index of refraction
The implementation assumes that cos_theta_i > 0, i.e. light enters from outside of the conducting layer (generally a reasonable assumption unless very thin layers are being simulated)
cos_theta_i
(float):Cosine of the angle between the surface normal and the incident ray
eta
(enoki.scalar.Complex2f):Relative refractive index (complex-valued)
The unpolarized Fresnel reflection coefficient.
Calculates the polarized Fresnel reflection coefficient at a planar interface between two dielectrics or conductors. Returns complex values encoding the amplitude and phase shift of the s- and p-polarized waves.
This is the most general version, which subsumes all others (at the cost of transcendental function evaluations in the complex-valued arithmetic)
cos_theta_i
(float):Cosine of the angle between the surface normal and the incident ray
eta
(enoki.scalar.Complex2f):Complex-valued relative refractive index of the interface. In the real case, a value greater than 1.0 means that the surface normal points into the region of lower density.
A tuple (a_s, a_p, cos_theta_t, eta_it, eta_ti) consisting of
a_s Perpendicularly polarized wave amplitude and phase shift.
a_p Parallel polarized wave amplitude and phase shift.
cos_theta_t Cosine of the angle between the surface normal and the transmitted ray. Zero in the case of total internal reflection.
eta_it Relative index of refraction in the direction of travel
eta_ti Reciprocal of the relative index of refraction in the direction of travel. In the real-valued case, this also happens to be equal to the scale factor that must be applied to the X and Y component of the refracted direction.
arg0
(mitsuba.render.HitComputeFlags
):no description available
arg1
(mitsuba.render.HitComputeFlags
):no description available
no description available
arg0
(int):no description available
arg1
(mitsuba.render.BSDFFlags
):no description available
no description available
arg0
(int):no description available
arg1
(mitsuba.render.PhaseFunctionFlags
):no description available
no description available
Reflection in local coordinates
wi
(enoki.scalar.Vector3f):no description available
no description available
Reflect wi with respect to a given surface normal
wi
(enoki.scalar.Vector3f):no description available
m
(enoki.scalar.Vector3f):no description available
no description available
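For reference, the mirror reflection about a normal can be written in a few lines of plain Python (an illustrative sketch, not Mitsuba's vectorized implementation), using the identity r = 2 (wi · m) m - wi with wi pointing away from the surface:

```python
def reflect(wi, m):
    # Mirror the direction `wi` (pointing away from the surface) about the
    # normal `m`: r = 2 (wi . m) m - wi
    d = 2.0 * (wi[0] * m[0] + wi[1] * m[1] + wi[2] * m[2])
    return (d * m[0] - wi[0], d * m[1] - wi[1], d * m[2] - wi[2])
```

With m = (0, 0, 1) this reduces to the local-coordinates variant above, which simply negates the tangential components.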
Refraction in local coordinates
The ‘cos_theta_t’ and ‘eta_ti’ parameters are given by the last two tuple entries returned by the fresnel and fresnel_polarized functions.
wi
(enoki.scalar.Vector3f):no description available
cos_theta_t
(float):no description available
eta_ti
(float):no description available
no description available
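In plain Python, this amounts to scaling the tangential components by eta_ti and substituting cos_theta_t for the Z component. The sketch below assumes the local shading-frame convention (surface normal along +Z) and the tuple entries returned by the fresnel function described earlier:

```python
def refract_local(wi, cos_theta_t, eta_ti):
    # Refraction in local shading coordinates: the X and Y components are
    # scaled by eta_ti (see the fresnel() return values above), while the
    # Z component becomes the signed transmitted cosine.
    return (-eta_ti * wi[0], -eta_ti * wi[1], cos_theta_t)
```

For instance, refracting a normal-incidence ray wi = (0, 0, 1) with cos_theta_t = -1 passes straight through as (0, 0, -1).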
Refract wi
with respect to a given surface normal
wi
(enoki.scalar.Vector3f):Direction to refract
m
(enoki.scalar.Vector3f):Surface normal
cos_theta_t
(float):Cosine of the angle between the normal and the transmitted ray, as computed e.g. by fresnel.
eta_ti
(float):Relative index of refraction (transmitted / incident)
no description available
arg0
(str):no description available
arg1
(Callable[[mitsuba.core.Properties
], object]):no description available
no description available
arg0
(str):no description available
arg1
(Callable[[mitsuba.core.Properties
], object]):no description available
no description available
arg0
(str):no description available
arg1
(Callable[[mitsuba.core.Properties
], object]):no description available
no description available
arg0
(str):no description available
arg1
(Callable[[mitsuba.core.Properties
], object]):no description available
no description available
arg0
(str):no description available
arg1
(Callable[[mitsuba.core.Properties
], object]):no description available
no description available
arg0
(str):no description available
arg1
(Callable[[mitsuba.core.Properties
], object]):no description available
no description available
arg0
(enoki.scalar.Vector3f):no description available
arg1
(enoki.scalar.Vector0f):no description available
no description available
Look up the model coefficients for an sRGB color value. @param c An sRGB color value where all components are in [0, 1]. @return Coefficients for use with srgb_model_eval
arg0
(enoki.scalar.Vector3f):no description available
no description available
arg0
(enoki.scalar.Vector3f):no description available
no description available