Extensible 3D (X3D)
Part 1: Architecture and base components

X Volume Rendering Component (Extension Proposal)


X.1 Introduction

X.1.1 Name

The name of this component is "VolumeRendering". This name shall be used when referring to this component in the COMPONENT statement (see 7.2.5.4 Component statement).

X.1.2 Overview

This component provides the ability to specify and render volumetric data sets. Table X.1 provides links to the major topics in this clause.

Table X.1 — Topics

X.2 Concepts

X.2.1 Overview

Volume rendering is an alternate form of visual data representation compared to the traditional polygonal form used in the rest of this specification. Where polygons represent an infinitely thin plane, volume data represents a three-dimensional block of space that contains some data. When polygonal data representing a volume in space is sliced, such as with a clipping plane, there is empty space inside. In the same situation, volumetric data will show the internals of that volume.

There are many different techniques for implementing rendering of volumetric data. This component does not define the technique used to render the data, only the type of visual output needed. In addition, it defines several different types of data representations to which the renderings may be applied. In order to implement some of the higher-complexity representations, the implementor may need to use a more complex rendering technique than is needed for the simpler representations, though this is not required. Each of the rendering nodes represents the visual output required, not the technique used to implement it. Most of the rendering styles defined in this component are formally defined in [FOLEY].

X.2.2 Representing Volumetric Data

X.2.2.1 Coordinate System

The coordinate system places the textures in the volume such that each 2D texture slice lies in the X-Y plane, with the depth increasing away from the viewer along the +Z axis. Note that this effectively inverts the 3D texture coordinates in the R axis direction, which are defined to have depth increasing along the -Z axis (see Figure 33.1). The volume is centered around the local origin and is subject to the parent transformation hierarchy, including scales, shears and rotations.

X.2.2.2 Registration and Scaling

Volumetric data represents volume information that typically comes from the real world: for example, human body scans or finite element analysis of an engine part. The volumetric data is typically part of a larger environment space and thus needs to be located within that space, so that volumes for different parts (e.g. the arm and leg of a single human) may be presented in a spatially correct manner. Typically volumes are not a unit cube in size, so extra dimensional information must be provided with the volume to indicate its true size in the local coordinate system.

X.2.2.3 Data Representation

X.2.2.3.1 3D Texture Definition

Volume rendering requires providing data in a volumetric form. This component uses the 3D texturing component (See ISO/IEC 19775-1 PDAM-1 3D-Texturing) to represent the raw volume data, but without rendering that data directly onto polygonal surfaces. Volumetric rendering may make use of multiple 3D textures to generate a final visual form.

Data may be represented using between 1 and 4 colour components. How each colour component is to be interpreted as part of the rendering shall be defined for each node. Some nodes may require a specific minimum number of components, or define that anything more than a specific number will be ignored. Providing extra data may not be helpful to the implementation. In cases where not enough components are provided (e.g. a surface normal texture defined with only a 1 or 2 component colour image), the entire data source shall be ignored.

X.2.2.3.2 Vector and Normal Representation

Some nodes make use of 3D textures to convey data other than colour, such as normal or vector information. For the purposes of representing 3D information, the texture components shall be interpreted as defined by Table X.2.

Table X.2 — Mapping of texture colour components to 3D coordinates

Colour Component    3D Coordinate
Red                 X
Green               Y
Blue                Z
Alpha               Ignored

If the texture provided for the field does not contain enough colour components for the data to be represented, it shall be ignored and the node's default behaviour used.

If a rendering style requires a surface normal value and must implicitly calculate one, then the normal at a given voxel is the normalised gradient of the scalar field at that voxel location.
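The following Python sketch (informative only) illustrates this rule. It assumes the scalar field is held in a NumPy array; the array layout, axis-to-coordinate mapping and function name are illustrative assumptions, not part of this specification.

import numpy as np

def implicit_normals(voxels):
    """Per-voxel surface normals as the normalised gradient of the scalar field
    (central differences via numpy.gradient). Axis 0, 1, 2 are assumed to map to
    the X, Y, Z directions respectively."""
    gx, gy, gz = np.gradient(voxels.astype(float))
    normals = np.stack((gx, gy, gz), axis=-1)
    length = np.linalg.norm(normals, axis=-1, keepdims=True)
    # Leave zero-length gradients (homogeneous regions) as zero vectors.
    return np.where(length > 0, normals / np.where(length == 0, 1, length), 0.0)

# Example: an 8x8x8 scalar field with a density ramp along the last axis.
field = np.tile(np.linspace(0.0, 1.0, 8), (8, 8, 1))
print(implicit_normals(field)[4, 4, 4])   # points along the ramp direction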

X.2.2.3.3 Data Optimisation

An implementation is free to apply whatever internal data reduction techniques are desired. An explicit volume data representation is provided in the OctTree node, which allows the user to describe progressively more detailed volumetric data. When the user presents data in this form, it shall be followed as the required rendering technique. However, within a specific volume data representation, the implementation may also perform its own optimisation techniques, for example automatic mipmapping.

Volume visualisation data sets are not required to be represented in sizes that are powers of two. Implementations may need to internally pad the texture sizes for passing to the underlying rendering engine, but user-provided content is not required to do this.

X.2.2.4 Segmentation Information

The volume data may optionally represent segmented data sets. Doing so requires representing the data in a slightly different manner than a standard volume data set, so a separate node is used. Segmentation data takes the form of an additional volume of data where each voxel represents a segment ID value in addition to the other values represented in each voxel. The segmentation information is used by the rendering process to control how each voxel is to be rendered. It is not unusual to use segmentation information to render each segment identifier with a different style; for example, bone using isosurfaces and skin using tone shading.

X.2.2.5 Tensor Representation

This specification does not explicitly handle or represent tensor data. Tensor information may be rendered using the techniques in this profile, even though no direct data is being transmitted. It is recommended that, if an application needs to know about the existence of tensor data, the metadata capabilities of the specification be used.

X.2.2.6 Visual Representation

Volumetric data is typically given as a rectangular block of information. Turning that into something meaningful, where structures may be discernible, is the job of the rendering process. However, there is no one-size-fits-all approach to volume rendering. A technique that is good at exposing structures for medical visualisation may be poor for fluid simulation visualisation.

To allow for these different visual outputs, this component separates the scene graph into two sets of responsibilities - nodes for representing the volume data, and nodes for rendering that volume data in different ways. In this way, the same rendering process may be used for different sets of volume data, or varying rendering styles may be used to highlight different structures within the one volume.

Many rendering techniques map the volume data to a visual representation through the use of another texture known as a Transfer function. This secondary texture defines the colours to use, acting as a form of lookup table. Transfer functions can be defined in 1, 2 or 3 dimensions. As X3D does not define a 1-dimensional texture capability, this can be simulated through the use of a 2D texture that is only 1 pixel wide.

X.2.3 Interaction with Other Nodes and Components

X.2.3.1 Overview

Volumetric rendering requires a completely different implementation path from traditional polygonal rendering. The data represents not only surface information, but also colour and potentially lighting. As such, volume rendering occupies the same place in the renderable scene graph as an X3DShapeNode, rather than acting as individual geometry or appearance information.

X.2.3.2 Lighting

Volumetric rendering is not required to follow the standard lighting equations for this specification. Many techniques will include the ability to self-light and self-shadow using information from the parent scene graph (light scoping etc).

The volume data is rendered using one or more rendering styles. Each style defines its own lighting equation that takes the colour and opacity value from the previously evaluated style, modifies it according to the local style rules, and generates an output colour and opacity value. The first style applied to the voxel sources the values directly from the voxel data, using the colour or opacity channels as needed (typically the first style applied is a transfer function, such as the OpacityMapVolumeStyle).

Many of the rendering styles involve non-photorealistic effects. Each style presents its own lighting equation describing how to get from the current colour and opacity values to the contributed output colour. These equations may or may not use the underlying voxel values. The following are some common terms found in the lighting equations: Cv and Ov denote the colour and opacity input to a style; Cg and Og denote its output colour and opacity; n denotes the surface normal at the voxel; and V denotes the normalized view direction.

The view direction for any lighting or rendering calculation is the vector from the user's current location in the world to the voxel currently being processed. Lighting and rendering style calculations are assumed to be evaluated individually for each voxel.

X.2.3.3 Geometry

The volumetric rendering nodes are leaf nodes in the renderable tree. Volumetric nodes may exist as part of a shared scene graph with DEF/USE, though this is expected to be very rare in practice.

X.2.4 Conformance

X.2.4.1 Node Support

The minimum required voxel dimensions that shall be supported are 256x256x256.

X.2.4.2 Hardware requirements

There are no specific requirements for hardware acceleration of this component. In addition, this component does not define the specific implementation strategy to be used by a given rendering style. It is equally valid to implement the code using simple multi-pass rendering as it is to use hardware shaders.

X.2.4.3 Scene Graph Interaction

Sensor nodes that require interaction with the geometry (e.g. TouchSensor) shall provide intersection information based on the volume's bounds for minimum conformance. An implementation may optionally provide real intersection information by performing ray casting into the volume space and reporting the first non-transparent voxel hit.

Navigation and collision detection shall also have a minimum conformance requirement of using the bounds of the volume. In addition, the implementation may allow greater precision with non-opaque voxels, in a similar manner to the sensor interactions.

X.3 Abstract Types

X.3.1 X3DVolumeNode

X3DVolumeNode : X3DChildNode, X3DBoundedObject {
  SFVec3f [in,out] dimensions       1 1 1     [0,∞)
  SFNode  [in,out] metadata         NULL      [X3DMetadataObject]
  SFVec3f []       bboxCenter       0 0 0     (-∞,∞)
  SFVec3f []       bboxSize         -1 -1 -1  [0,∞) or -1 -1 -1
}

This abstract node type is the base type for all node types that describe volumetric data to be rendered. It sits at the same level as the polygonal X3DShapeNode (see ISO/IEC 19775-1 12.3.4 X3DShapeNode) within the scene graph structure, but defines volumetric data rather than polygons.

The dimensions field specifies the dimensions of this geometry in the local coordinate space using standard X3D units. It is assumed the volume is centered around the local origin. If the bounding box size is set, it will typically be the same size as the dimensions.

X.3.2 X3DComposableVolumeRenderStyleNode

X3DComposableVolumeRenderStyleNode : X3DVolumeRenderStyleNode {
  SFBool [in,out] enabled TRUE
}

This abstract node type is the base type for all node types that allow rendering styles to be sequentially composed together to form a single renderable output. The output of one style may be used as the input of the next style. Composition in this manner is performed using the ComposedVolumeStyle node.

X.3.3 X3DVolumeRenderStyleNode

X3DVolumeRenderStyleNode : X3DNode {
  SFBool [in,out] enabled TRUE
}

This abstract node type is the base type for all node types which specify a specific visual rendering style to be used.

The enabled field defines whether this rendering style is currently applied to the volume data. If the field is set to FALSE, the rendering shall not be applied at all; the renderer shall act as though no volume data is rendered. Effectively, this allows the end user to turn volume rendering of specific parts of the volume on and off without needing to add or remove style definitions from the volume data node.

X.4 Node reference

X.4.1 BoundaryEnhancementVolumeStyle

BoundaryEnhancementVolumeStyle : X3DComposableVolumeRenderStyleNode {
  SFBool      [in,out] enabled          TRUE
  SFNode      [in,out] metadata         NULL    [X3DMetadataObject]
  SFNode      [in,out] surfaceNormals   NULL    [X3DTexture3DNode]
  SFFloat     [in,out] retainedOpacity  1       [0,1]
  SFFloat     [in,out] boundaryOpacity  0       [0,∞)
  SFFloat     [in,out] opacityFactor    1       [0,∞)
}

Provides boundary enhancement for the volume rendering style. In this style the colour rendered is based on the gradient magnitude: faster-changing gradients (surface normals) are rendered darker than slower-changing ones. Areas of different density are made more visible relative to parts of relatively constant density.

The surfaceNormals field is used to provide pre-calculated surface normal information for each voxel. If provided, this shall be used for all lighting calculations. If not provided, the implementation shall automatically generate surface normals using an implementation-specific method. If a value is provided, it shall have exactly the same voxel dimensions as the base volume data that it represents. If the dimensions are not identical, the browser shall generate a warning and automatically generate its own internal normals as though no value was provided for this field.

The output colour for this style is obtained by combining a fraction of the volume's original opacity with an enhancement based on the local boundary strength (the magnitude of the gradient between adjacent voxels). Colour components from the input are transferred unmodified to the output. The function used is

Cg = Cv
Og = Ov * (kgc + kgs * (|Δf|)^kge)

where:

Cv, Ov = colour and opacity of the input to this style
Cg, Og = colour and opacity output by this style
kgc    = retainedOpacity, the fraction of the original opacity that is retained
kgs    = boundaryOpacity, the amount of boundary enhancement applied
kge    = opacityFactor, the power used to sharpen the boundary contribution
|Δf|   = magnitude of the gradient of the scalar field at the voxel
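The opacity term can be illustrated with the following Python sketch (informative only). It assumes the gradient magnitude has already been computed for the voxel; the function and parameter names are illustrative, not part of this specification.

def boundary_enhanced_opacity(o_v, grad_mag,
                              retained_opacity=1.0,    # kgc
                              boundary_opacity=0.0,    # kgs
                              opacity_factor=1.0):     # kge
    """Og = Ov * (kgc + kgs * |grad f|^kge); colour passes through unchanged."""
    return o_v * (retained_opacity + boundary_opacity * grad_mag ** opacity_factor)

print(boundary_enhanced_opacity(0.5, 0.8, retained_opacity=0.6,
                                boundary_opacity=1.5, opacity_factor=2.0))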

X.4.2 CartoonVolumeStyle

CartoonVolumeStyle : X3DComposableVolumeRenderStyleNode {
  SFBool   [in,out] enabled          TRUE
  SFNode   [in,out] metadata         NULL   [X3DMetadataObject]
  SFNode   [in,out] surfaceNormals   NULL   [X3DTexture3DNode]
  SFColor  [in,out] parallelColor    0 0 0  [0,1]
  SFColor  [in,out] orthogonalColor  1 1 1  [0,1]
  SFInt32  [in,out] colorSteps       4      [1,64]
}

Uses cartoon-style non-photorealistic rendering of the volume. Cartoon rendering uses two colours that are rendered in a series of distinct flat-shaded sections based on how closely the local surface normal aligns with the view direction, with no gradients in between.

The surfaceNormals field contains a 3D texture with at least 3 component values. Each voxel in the texture represents the surface normal direction for the corresponding voxel in the base data source. This texture should be identical in dimensions to the source data. If not, the implementation may interpolate or average between adjacent voxels to determine the average normal at the voxel required. If this field is empty, the implementation shall automatically determine the surface normal using algorithmic means.

The parallelColor field specifies the colour to be used for surface normals that are orthogonal to the viewer's current location (the plane of the surface itself is parallel to the user's view direction).

The orthogonalColor field specifies the colour to be used for surface normals that are parallel to the viewer's current location (the plane of the surface itself is orthogonal to the user's view direction). Surfaces that are rotated further than orthogonal to the view direction (i.e. back facing) are not rendered and shall have no colour calculated for them.

The colorSteps field indicates how many distinct colours are taken from the interpolated colours and used to render the object. If the value is 1, no colour interpolation takes place and only the orthogonal colour is used to render the surface. For any other value, colours are interpolated between parallelColor and orthogonalColor in HSV colour space for the RGB components and linearly for the alpha component, and the distinct colours are determined using a midpoint calculation as described below.

To determine the colours to be used, the angle between the surface normal and the view direction is used. The range [0, π/2] is divided into colorSteps equal ranges. The two ends of the spectrum are not interpolated in this way and shall use the specified field values. For each of the remaining ranges, the midpoint angle is found and the interpolated colour at that point is used for the whole range.

EXAMPLE  using the default field values, the colour ranges would be:

The final output colour is determined by combining this interpolated colour value with the incoming opacity value. Colour components of the incoming colour are ignored.
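The band calculation can be illustrated with the following Python sketch (informative only). It reflects one reading of the algorithm above and, for brevity, interpolates linearly in RGB rather than in HSV; the function and variable names are illustrative only.

import math

def cartoon_bands(parallel_color, orthogonal_color, color_steps):
    """Return (upper_angle, colour) pairs for each flat-shaded band over [0, pi/2].

    The first and last bands use the field values directly; intermediate bands
    use the colour interpolated at the band's midpoint angle."""
    step = (math.pi / 2) / color_steps
    bands = []
    for i in range(color_steps):
        if i == 0:
            colour = orthogonal_color          # normal facing the viewer
        elif i == color_steps - 1:
            colour = parallel_color            # normal at right angles to the view
        else:
            t = ((i + 0.5) * step) / (math.pi / 2)   # midpoint of this band, normalised
            colour = tuple(o + (p - o) * t
                           for p, o in zip(parallel_color, orthogonal_color))
        bands.append(((i + 1) * step, colour))
    return bands

# Default field values: parallelColor 0 0 0, orthogonalColor 1 1 1, colorSteps 4.
for upper, colour in cartoon_bands((0, 0, 0), (1, 1, 1), 4):
    print(f"angle < {upper:.3f} rad -> {colour}")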

X.4.3 ComposedVolumeStyle

ComposedVolumeStyle : X3DVolumeRenderStyleNode {
  SFBool [in,out] enabled      TRUE
  SFNode [in,out] metadata     NULL  [X3DMetadataObject]
  SFBool [in,out] ordered      FALSE
  MFNode [in,out] renderStyle  []    [X3DComposableVolumeRenderStyleNode]
}

A rendering style node that allows multiple styles to be composited together into a single rendering pass. This is used, for example, to render a simple image with both edge and silhouette styles.

The renderStyle field contains a list of contributing style node references that can be applied to the object. Whether the styles shall be rendered strictly in order is dependent on the ordered field value. If this field value is FALSE, the implementation may apply the various styles in any order (or even in parallel if the underlying implementation supports it). If the value is TRUE, the implementation shall apply each style strictly in the order declared, starting at index 0.

X.4.4 EdgeEnhancementVolumeStyle

EdgeEnhancementVolumeStyle : X3DComposableVolumeRenderStyleNode {
  SFColor  [in,out] edgeColor         0 0 0   [0,1]
  SFBool   [in,out] enabled           TRUE
  SFFloat  [in,out] gradientThreshold 0.4     [0,π/2]
  SFNode   [in,out] metadata          NULL    [X3DMetadataObject]
  SFNode   [in,out] surfaceNormals    NULL    [X3DTexture3DNode]
}

Provides edge enhancement for the volume rendering style. Enhancement of the basic volume is provided by darkening voxels based on the orientation of their surface normal relative to the view direction. Perpendicular normals colour the voxels according to edgeColor, while voxels with parallel normals are not changed at all. A threshold can be set that controls how close to parallel the normal direction needs to be before no colour changes are made.

The gradientThreshold field defines the minimum angle (in radians) away from the view direction vector that the surface normal needs to be before any enhancement is applied.

The edgeColor field defines the colour to be used to highlight the edges.

The surfaceNormals field contains a 3D texture with at least 3 component values. Each voxel in the texture represents the surface normal direction for the corresponding voxel in the base data source. This texture should be identical in dimensions to the source data. If not, the implementation may interpolate or average between adjacent voxels to determine the average normal at the voxel required. If this field is empty, the implementation shall automatically determine the surface normal using algorithmic means.

The final colour is determined by:

Cg = Cv                                          if |n · V| ≥ cos(gradientThreshold)
Cg = Cv * |n · V| + edgeColor * (1 - |n · V|)    otherwise
Og = Ov
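The colour rule can be illustrated with the following Python sketch (informative only). It assumes the per-voxel surface normal and the view vector are already available as unit 3-vectors; the helper names are illustrative, not part of this specification.

import math

def edge_enhance(c_v, normal, view, edge_color, gradient_threshold=0.4):
    """Blend towards edgeColor when the normal is nearly perpendicular to the view."""
    n_dot_v = abs(sum(n * v for n, v in zip(normal, view)))
    if n_dot_v >= math.cos(gradient_threshold):
        return c_v                                  # facing the viewer: unchanged
    return tuple(c * n_dot_v + e * (1.0 - n_dot_v)
                 for c, e in zip(c_v, edge_color))  # silhouette edge: blend to edgeColor

print(edge_enhance((1.0, 0.8, 0.8), (1, 0, 0), (0, 0, 1), edge_color=(0, 0, 0)))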

X.4.5 ISOSurfaceVolumeData

ISOSurfaceVolumeData : X3DVolumeNode {
  SFVec3f [in,out] dimensions       1 1 1     (0,∞)
  SFNode  [in,out] metadata         NULL      [X3DMetadataObject]
  MFNode  [in,out] renderStyle      []        [X3DVolumeRenderStyleNode]
  SFNode  [in,out] gradients        NULL      [X3DTexture3DNode]
  SFNode  [in,out] voxels           NULL      [X3DTexture3DNode]
  SFVec3f []       bboxCenter       0 0 0     (-∞,∞)
  SFVec3f []       bboxSize         -1 -1 -1  [0,∞) or -1 -1 -1
  MFFloat [in,out] surfaceValues    []        (-∞,∞)
  SFFloat [in,out] contourStepSize  0         (-∞,∞)
  SFFloat [in,out] surfaceTolerance 0         [0,∞)
}

Defines a data set in which each voxel is treated as a raw value and surfaces are determined from those values. A surface is defined to be the boundary between regions of different iso values, where that difference is greater than the surfaceTolerance amount. The gradients field may be used to provide explicit per-voxel gradient direction information for determining surface boundaries rather than having it implicitly calculated by the implementation.

This data representation has one of three possible modes of operation based on the values of the two fields surfaceValues and contourStepSize. If surfaceValues has a single value defined, then the isosurface that corresponds to that value is rendered. If contourStepSize is non-zero, all isosurfaces that are multiples of that step size from the initial surface value are also rendered. For example, with a surface value of 0.25 and a step size of 0.1, additional isosurfaces at 0.05, 0.15, 0.35, 0.45, etc. shall also be rendered. If contourStepSize is left at the default value of zero, only that single iso value is rendered as a surface.

If surfaceValues has more than a single value defined then the contourStepSize field is ignored and surfaces corresponding to those nominated values are rendered.
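A minimal Python sketch of this isovalue selection logic (informative only) follows. It assumes the data range is known so that the contour expansion can be bounded; the function name and the data-range parameters are illustrative assumptions, not part of this specification.

def iso_values(surface_values, contour_step_size, data_min=0.0, data_max=1.0):
    """Return the list of isovalues to extract, following the rules above."""
    if len(surface_values) > 1:
        return sorted(surface_values)          # explicit list: step size is ignored
    if not surface_values:
        return []
    base = surface_values[0]
    if contour_step_size == 0:
        return [base]                          # single surface only
    values = {base}
    step = abs(contour_step_size)
    v = base - step
    while v > data_min:
        values.add(round(v, 10))
        v -= step
    v = base + step
    while v < data_max:
        values.add(round(v, 10))
        v += step
    return sorted(values)

print(iso_values([0.25], 0.1))   # [0.05, 0.15, 0.25, 0.35, ..., 0.95]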

For each isosurface extracted from the data set, a separate render style may be assigned using the renderStyle field. The rendering style for a surface is taken from the renderStyle field at the index corresponding to that surface value. In the case where automatic contours are being extracted using the step size, the explicit surface value shall use the first declared render style, and render styles are then assigned starting from the smallest iso value. In all cases, if there are insufficient render styles defined for the number of isosurfaces to be rendered, the last style shall be used for all surfaces that do not have an explicit style set.

Ov is defined to be 1 for this volume data, regardless of the number of components in the provided volume data texture.

X.4.6 MIPVolumeStyle

MIPVolumeStyle : X3DVolumeRenderStyleNode {
  SFBool      [in,out] enabled             TRUE
  SFNode      [in,out] metadata            NULL [X3DMetadataObject]
  SFFloat     [in,out] intensityThreshold  0            [0,∞)
}

The Maximum Intensity Projection (MIP) volume style uses the voxel data directly to generate an output colour based on the maximum voxel value found along the viewing rays from the eye point. This rendering style also includes the option to use the extended form of Local Maximum Intensity Projection (LMIP, see [LMIP]).

The output colour is determined by projecting rays into the voxel data from the viewer location and finding the maximum voxel value found along that ray. If the intensityThreshold value is non-zero then rendering will use the first maximum value encountered that exceeds the threshold rather than the maximum found along the entire ray. Figure X.1 illustrates the difference in rendered value between LMIP and MIP.


Figure X.1 — Illustration of values selected when using MIP or LMIP volume rendering styles.

Since the output of this node is intensity values, all colour components will have the same value. The intensity is derived from the average of all colour components of the voxel data (though typical usage will only use single component textures). The Alpha channel is passed through as-is from the underlying data. If there is no alpha channel, then assume a value of 1.

Cg = max(sk), k = 0..N
Og = Ov

where:

sk = the voxel intensity value at sample point k along the viewing ray
N  = the number of samples taken along the viewing ray
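The difference between MIP and LMIP sample selection can be illustrated with the following Python sketch (informative only). It assumes the samples have already been taken front to back along one viewing ray, and it reflects one reading of the threshold rule (the fall-back to plain MIP when nothing exceeds the threshold is an assumption); names are illustrative only.

def mip_value(samples, intensity_threshold=0.0):
    """intensityThreshold == 0: plain MIP (global maximum along the ray).
    Otherwise (LMIP): the first local maximum exceeding the threshold."""
    if intensity_threshold <= 0.0:
        return max(samples)
    for i, s in enumerate(samples):
        ahead = samples[i + 1] if i + 1 < len(samples) else float("-inf")
        if s > intensity_threshold and s >= ahead:
            return s                      # first local maximum above the threshold
    return max(samples)                   # nothing exceeded the threshold (assumption)

ray = [0.1, 0.4, 0.35, 0.2, 0.9, 0.3]
print(mip_value(ray))                               # 0.9 (MIP)
print(mip_value(ray, intensity_threshold=0.3))      # 0.4 (LMIP)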

X.4.7 OctTree

OctTree : X3DChildNode, X3DBoundedObject {
  MFNode  [in,out] highRes      []       [X3DChildNode]
  SFNode  [in,out] lowRes       NULL     [X3DGroupingNode, X3DShapeNode, X3DVolumeNode]
  SFNode  [in,out] metadata     NULL     [X3DMetadataObject]
  SFBool  [out]    lowResActive
  SFVec3f []       bboxCenter   0 0 0    (-∞,∞)
  SFVec3f []       bboxSize     -1 -1 -1 [0,∞) or -1 -1 -1
  SFVec3f []       center       0 0 0    (-∞,∞)
  SFFloat []       range        20       [0,∞)
}

Allows for the definition of multiresolution data sets that resolve using octants of volume. This node is not restricted to only having volume data as its children - all other geometry types are also valid structures.

The level of detail is switched depending upon whether the user is closer or further than range metres from the coordinate center.

The lowRes field holds the low resolution object instance to be rendered when the viewer is outside range metres. The highRes field is used to hold the geometry to be viewed when the viewer is inside range metres. An OctTree renders up to 8 child subgraphs as defined by the highRes field. If this field contains more than 8 children, only the first 8 shall be rendered. If fewer than 8 children are defined, all shall be rendered. It is up to the user to spatially locate the geometry for each of the child subgraphs.
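The level-of-detail switch can be illustrated with the following Python sketch (informative only). It assumes the viewer position and the center are expressed in the same coordinate system; the function and parameter names are illustrative, not part of this specification.

import math

def active_children(viewer_pos, center, range_m, low_res, high_res):
    """Return the child subgraphs to render and the lowResActive output value."""
    distance = math.dist(viewer_pos, center)
    if distance > range_m:
        return [low_res], True            # outside range: render the lowRes child
    return high_res[:8], False            # inside range: render up to 8 highRes children

children, low_res_active = active_children(
    viewer_pos=(0.0, 0.0, 30.0), center=(0.0, 0.0, 0.0), range_m=20.0,
    low_res="lowResNode", high_res=["octant%d" % i for i in range(8)])
print(children, low_res_active)           # ['lowResNode'] True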

X.4.8 OpacityMapVolumeStyle

OpacityMapVolumeStyle : X3DComposableVolumeRenderStyleNode {
  SFBool [in,out] enabled          TRUE
  SFNode [in,out] metadata         NULL [X3DMetadataObject]
  SFNode [in,out] transferFunction NULL [X3DTextureNode]
}

Renders the volume using the opacity mapped to a transfer function texture. This is the default rendering style if none is defined for the volume data.

The transferFunction field holds a single texture representation, in either two or three dimensions, that maps the voxel data values to a specific colour output. If no value is supplied for this field, the default implementation shall generate a 256x1 greyscale alpha-only image that blends from completely transparent at pixel 0 to fully opaque at pixel 255. The texture may have any number of dimensions and any number of components. The voxel values are used as lookup coordinates into the transfer function texture, where the texel value represents the output colour.

Components are mapped from the voxel data to the transfer function in a component-wise fashion. The first component of the voxel data is an index into the first dimension of the transferFunction texture (S), and so on (see Table X.3). If there are more components defined in the voxel data than there are dimensions in the transfer function, the extra components are ignored. If there are more dimensions in the transfer function texture than components in the voxel data, the extra dimensions in the transfer function are ignored (effectively treating the voxel component data as a value of zero for the extra dimension). This mapping locates the texel value in the texture, which is then used as the output for this style. The colour value is treated like a normal texture, with the colour mapping as defined in Table X.4. A sketch of this lookup follows Table X.4.

Table X.3 — Transfer function texture coordinate mapping

Voxel Components    Transfer Function Texture Coordinates
Luminance           S
Luminance Alpha     S, T
RGB                 S, T, R
RGBA                S, T, R, Q

Table X.4 — Transfer function texture type to output colour mapping

Texture Components     Red  Green  Blue  Alpha
Luminance (L)          L    L      L     1
Luminance Alpha (LA)   L    L      L     A
RGB                    R    G      B     1
RGBA                   R    G      B     A
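The following Python sketch (informative only) builds the default 256x1 ramp described above and performs the component-wise lookup for a single-component (luminance) voxel value. The array layout and names are illustrative assumptions, not part of this specification.

def default_transfer_function():
    """256x1 greyscale/alpha ramp: transparent at index 0, opaque at index 255."""
    return [(i / 255.0, i / 255.0, i / 255.0, i / 255.0) for i in range(256)]

def apply_opacity_map(voxel_value, transfer_function):
    """Map a luminance voxel value in [0,1] to (R, G, B, A) via the S coordinate."""
    s = min(max(voxel_value, 0.0), 1.0)
    index = int(round(s * (len(transfer_function) - 1)))
    return transfer_function[index]

tf = default_transfer_function()
print(apply_opacity_map(0.5, tf))   # roughly (0.5, 0.5, 0.5, 0.5)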

X.4.9 SegmentedVolumeData

SegmentedVolumeData : X3DVolumeNode {
  SFVec3f [in,out] dimensions          1 1 1     (0,∞)
  SFNode  [in,out] metadata            NULL      [X3DMetadataObject]
  MFNode  [in,out] renderStyle         []        [X3DVolumeRenderStyleNode]
  MFBool  [in,out] segmentEnabled      []
  SFNode  [in,out] segmentIdentifiers  NULL        [X3DTexture3DNode]
  SFNode  [in,out] voxels              NULL        [X3DTexture3DNode]
  SFVec3f []       bboxCenter          0 0 0     (-∞,∞)
  SFVec3f []       bboxSize            -1 -1 -1  [0,∞) or -1 -1 -1
}

Defines a segmented volume data set that allows for representation of different rendering styles for each segment identifier.

The renderStyle field optionally describes the rendering styles to be used. If this field has a non-zero number of values, the defined rendering styles are applied to the object: the segment identifier of each voxel is used as an index into this array of values, and the style at that index is applied to the data described by that segment ID. If the field value is not specified by the user, the implementation shall use an OpacityMapVolumeStyle node with default values.

The voxels field holds a 3D texture with the data for each voxel. For each voxel there is a corresponding segment identifier supplied in the segmentIdentifiers field, which contains a single component texture. If the segmentIdentifiers texture is not identical in size to the main voxels, it shall be ignored. If it contains more than one colour component, only the red component of the colour shall be used to define the identifier.

The segmentEnabled field allows control over whether a segment is rendered or not. The indices of this array correspond to segment IDs; a value of FALSE at index i marks any data with segment ID i as not rendered. If a segment ID is used that lies beyond the length of the array, the value is assumed to be TRUE.
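The per-voxel style selection can be illustrated with the following Python sketch (informative only). The segment ID is assumed to have already been read from segmentIdentifiers; the clamp-to-last-style behaviour for segment IDs beyond the renderStyle array is an assumption borrowed from the ISOSurfaceVolumeData rules, and all names are illustrative only.

def style_for_voxel(segment_id, render_styles, segment_enabled):
    """Return the render style for this voxel, or None if the segment is disabled."""
    if segment_id < len(segment_enabled) and not segment_enabled[segment_id]:
        return None                                   # segment switched off
    if not render_styles:
        return "OpacityMapVolumeStyle(default)"       # default when no styles are given
    # Clamp to the last declared style when there are more segments than styles (assumption).
    return render_styles[min(segment_id, len(render_styles) - 1)]

styles = ["ToneMappedVolumeStyle", "EdgeEnhancementVolumeStyle"]
print(style_for_voxel(0, styles, [True, False, True]))   # ToneMappedVolumeStyle
print(style_for_voxel(1, styles, [True, False, True]))   # None (disabled)
print(style_for_voxel(5, styles, [True, False, True]))   # EdgeEnhancementVolumeStyle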

X.4.10 ShadedVolumeStyle

ShadedVolumeStyle : X3DVolumeRenderStyleNode {
  SFBool   [in,out] enabled        TRUE
  SFNode   [in,out] material       NULL  [X3DMaterialNode]
  SFNode   [in,out] metadata       NULL  [X3DMetadataObject]
  SFNode   [in,out] surfaceNormals NULL  [X3DTexture3DNode]
  SFBool   [in,out] lighting       FALSE
  SFBool   [in,out] shadows        FALSE
  SFString []       phaseFunction  "Henyey-Greenstein"
}

The shaded volume style applies the traditional local illumination model used in polygonal rendering to volume rendering. In this style, the source voxel value is ignored other than to determine whether it is a surface that needs to be shaded and what the normal at that surface is. Typically this style is used in combination with ISOSurfaceVolumeData to extract surfaces from the data and render each surface with a different colour. Determination of whether a voxel should be shaded using this model is the responsibility of the volume data definition.

Once a voxel has been determined to be rendered, a colour and opacity are determined based on whether a value has been specified for the material field. If a material value is provided, the voxel is considered to be lit using the lighting equations below. If no material node is provided, it is considered to be unlit and the colour of the voxel is completely transparent.

When a material node is provided, the voxel is lit using the Blinn-Phong local illumination model (which is similar to the model used for polygonal surfaces). The lighting equation is defined as:

Cg = IFrgb × (1 - f0)
     + f0 × (CErgb + SUM( oni × attenuationi × spoti × ILrgb
                          × (ambienti + diffusei + speculari)))

Og = Ov × (1 - X3DMaterialNode transparency)

where:

attenuationi = 1 / max(a1 + a2 × dL + a3 × dL², 1)
ambienti = Iia × CDrgb × Ca

diffusei = Ii × CDrgb × (N · L)
speculari = Ii × CSrgb × (N · ((L + V) / |L + V|))^(shininess × 128)

and:

· = modified vector dot product: if dot product < 0, then 0.0, otherwise dot product
a1, a2, a3 = light i attenuation
dV = distance from this voxel to viewer's position, in coordinate system of current fog node
dL = distance from light to voxel, in light's coordinate system
f0 = fog interpolant, see Table 17.5 for calculation
IFrgb = currently bound fog's color
ILrgb = light i color
Ii = light i intensity
Iia = light i ambientIntensity
L = (PointLight/SpotLight) normalized vector from this voxel to light source i position
L = (DirectionalLight) -direction of light source i
N = normalized normal vector at this voxel (taken from the surfaceNormals field or automatically calculated)
Ca = X3DMaterialNode ambientIntensity
CDrgb = diffuse colour, from a node derived from X3DMaterialNode
CErgb = X3DMaterialNode emissiveColor
CSrgb = X3DMaterialNode specularColor
oni = 1, if light source i affects this voxel,
  0, if light source i does not affect this voxel. The following conditions indicate that light source i does not affect this voxel:

   a. the voxel is farther away than radius for PointLight or SpotLight;
   b. the volume is outside the enclosing X3DGroupingNode;
   c. the on field of the light is FALSE; and/or
   d. the lighting field of this volume style is FALSE.
shininess = X3DMaterialNode shininess
spotAngle = arccosine(-L · spotDiri)
spotBWi = SpotLight i beamWidth
spotCOi = SpotLight i cutOffAngle
spoti = spotlight factor, see Table 17.4 for calculation
spotDiri = normalized SpotLight i direction
SUM = sum over all light sources i
V = normalized vector from the voxel to viewer's position
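A minimal Python sketch of the single-directional-light case of this equation (informative only) follows. Fog, attenuation and spotlight factors are omitted for brevity, result clamping is not applied, and the helper and parameter names are illustrative, not part of this specification.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def dot(a, b):
    # Modified dot product: negative values clamp to 0.
    return max(sum(x * y for x, y in zip(a, b)), 0.0)

def shade_voxel(normal, to_light, to_viewer, light_color, light_intensity,
                light_ambient, diffuse_color, specular_color, emissive_color,
                ambient_intensity, shininess):
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    h = normalize(tuple(a + b for a, b in zip(l, v)))      # half vector (L + V)/|L + V|
    out = []
    for ch in range(3):
        ambient = light_ambient * diffuse_color[ch] * ambient_intensity
        diffuse = light_intensity * diffuse_color[ch] * dot(n, l)
        specular = light_intensity * specular_color[ch] * dot(n, h) ** (shininess * 128)
        out.append(emissive_color[ch] + light_color[ch] * (ambient + diffuse + specular))
    return tuple(out)

print(shade_voxel(normal=(0, 0, 1), to_light=(0, 0, 1), to_viewer=(0, 0, 1),
                  light_color=(1, 1, 1), light_intensity=1.0, light_ambient=0.1,
                  diffuse_color=(0.8, 0.1, 0.1), specular_color=(1, 1, 1),
                  emissive_color=(0, 0, 0), ambient_intensity=0.2, shininess=0.2))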

The lighting field controls whether the rendering should calculate and apply shading effects to the visual output. When shading is applied, the value of the surfaceNormals field can be used to provide pre-generated surface normals for lighting calculations. If lighting is not enabled, then flat shading using the surface colour is to be used.

The surfaceNormals field contains a 3D texture with at least 3 component values. Each voxel in the texture represents the surface normal direction for the corresponding voxel in the base data source. This texture should be identical in dimensions to the source data. If not, the implementation may interpolate or average between adjacent voxels to determine the average normal at the voxel required. If this field is empty, the implementation shall automatically determine the surface normal using algorithmic means.

The shadows field controls whether the rendering should calculate and apply shadows to the visual output. A value of FALSE requires that no shadowing be applied. A value of TRUE requires that shadows be applied to the object. If the lighting field is set to FALSE, this field shall be ignored and no shadows generated. This field shall also be ignored if the requested component level is less than 4.

The phaseFunction field is used to define the scattering model for use in an implementation using global illumination. The name defines the model type, based on standard algorithms defined externally to this specification. The default implementation is the Henyey-Greenstein phase function defined in [HENYEY].

X.4.11 SilhouetteEnhancementVolumeStyle

SilhouetteEnhancementVolumeStyle : X3DComposableVolumeRenderStyleNode {
  SFBool  [in,out] enabled                    TRUE
  SFNode  [in,out] metadata                   NULL  [X3DMetadataObject]
  SFNode  [in,out] surfaceNormals             NULL  [X3DTexture3DNode]
  SFFloat [in,out] silhouetteBoundaryOpacity  0     [0,∞)
  SFFloat [in,out] silhouetteRetainedOpacity  1     [0,1]
  SFFloat [in,out] silhouetteSharpness        0.5   [0,∞)
}

Provides silhouette enhancement for the volume rendering style. Enhancement of the basic volume is provided by darkening voxels based on their orientation relative to the view direction. Perpendicular voxels are completely opaque while parallel voxels are completely transparent. A threshold can be set controlling how close to perpendicular the direction needs to be before the values are made more opaque, by changing the silhouetteSharpness field value.

Og = Ov * (ksc + kss * (1 - |n · V|)^kse)

where:

ksc = silhouetteRetainedOpacity, the fraction of the original opacity that is retained
kss = silhouetteBoundaryOpacity, the amount of silhouette enhancement applied
kse = silhouetteSharpness, the power used to sharpen the silhouette contribution
n   = the surface normal at the voxel
V   = the normalized view direction

The surfaceNormals field contains a 3D texture with at least 3 component values. Each voxel in the texture represents the surface normal direction for the corresponding voxel in the base data source. This texture should be identical in dimensions to the source data. If not, the implementation may interpolate or average between adjacent voxels to determine the average normal at the voxel required. If this field is empty, the implementation shall automatically determine the surface normal using algorithmic means.
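The opacity term can be illustrated with the following Python sketch (informative only), assuming unit-length normal and view vectors; the names are illustrative, not part of this specification.

def silhouette_enhanced_opacity(o_v, normal, view,
                                retained=1.0,      # ksc, silhouetteRetainedOpacity
                                boundary=0.0,      # kss, silhouetteBoundaryOpacity
                                sharpness=0.5):    # kse, silhouetteSharpness
    n_dot_v = abs(sum(n * v for n, v in zip(normal, view)))
    return o_v * (retained + boundary * (1.0 - n_dot_v) ** sharpness)

# A voxel whose normal is perpendicular to the view direction (a silhouette edge)
# receives the full boundary contribution.
print(silhouette_enhanced_opacity(0.4, (1, 0, 0), (0, 0, 1), retained=0.2, boundary=1.0))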

X.4.12 StippleVolumeStyle

StippleVolumeStyle : X3DVolumeRenderStyleNode {
  SFFloat [in,out] distanceFactor             1     [0,∞)
  SFBool  [in,out] enabled                    TRUE
  SFFloat [in,out] interiorFactor             1     [0,∞)
  SFFloat [in,out] lightingFactor             1     [0,∞)
  SFNode  [in,out] metadata                   NULL  [X3DMetadataObject]
  SFFloat [in,out] gradientThreshold          0.4   [0,π/2]
  SFFloat [in,out] gradientRetainedOpacity    1     [0,1]
  SFFloat [in,out] gradientBoundaryOpacity    0     [0,∞)
  SFFloat [in,out] gradientOpacityFactor      1     [0,∞)
  SFFloat [in,out] silhouetteRetainedOpacity  1     [0,1]
  SFFloat [in,out] silhouetteBoundaryOpacity  0     [0,∞)
  SFFloat [in,out] silhouetteOpacityFactor    1     [0,∞)
  SFFloat [in,out] resolutionFactor           1     [0,∞)
}

Renders the volume using stipple patterns, making use of Wang stipple patterns for three-dimensional data sets. Stipple patterns are automatically generated by the browser internals based on a number of algorithmic hints. It is recommended that the approach defined in [STIPPLE] be used.

The general approach of the rendering process is to render a set of points whose density is defined by a number of factors: edge, boundary and silhouette enhancements, lighting and other effects. The renderer determines an absolute maximum density of points in a voxel (Nmax) and then evaluates every voxel to obtain the number of points (N) to be rendered. The distribution of points in the volume of space is an implementation-specific detail. The final calculation of N is determined by the following set of equations:

The gradientThreshold field defines the minimum angle (in radians) away from the view direction vector that the surface normal needs to be before any boundary enhancement is applied.

N  = Nmax * Tb * Ts * Td * Tl * Tr * Ti
Tb = Cv * (kgc + kgs * (|Δf|)^kge)
Ts = Cv * (ksc + kss * (1 - |Δf · V|)^kse)
Td = 1 + (z / a)^kde
Tl = 1 - (Li · Δf)^kle
Tr = ((Dnear + di) / (Dnear + d0))^kre
Ti = ||Δf||^kie

where:

Tb, Ts, Td, Tl, Tr, Ti = the boundary, silhouette, distance, lighting, resolution and interior terms respectively
kgc, kgs, kge = gradientRetainedOpacity, gradientBoundaryOpacity and gradientOpacityFactor
ksc, kss, kse = silhouetteRetainedOpacity, silhouetteBoundaryOpacity and silhouetteOpacityFactor
kde = distanceFactor
kle = lightingFactor
kre = resolutionFactor
kie = interiorFactor
Δf  = the gradient of the scalar field at the voxel
V   = the normalized view direction
Li  = the direction of light source i

X.4.13 ToneMappedVolumeStyle

ToneMappedVolumeStyle : X3DComposableVolumeRenderStyleNode {
  SFColor     [in,out] coolColor      0 0 1  [0,1]
  SFBool      [in,out] enabled        TRUE
  SFNode      [in,out] metadata       NULL   [X3DMetadataObject]
  SFNode      [in,out] surfaceNormals NULL   [X3DTexture3DNode]
  SFColor     [in,out] warmColor      1 1 0  [0,1]
}

Renders the volume using the Gooch shading model of two-toned warm/cool colouring. Two colours are defined, a warm colour and a cool colour, and the renderer shades between them based on the orientation of the voxel relative to the light direction. This is not the same as the basic isosurface shading and lighting. The following colour formula is used:

cci = (1 + Li · n) * 0.5
Cg = Σi (cci * warmColor + (1 - cci) * coolColor)

The warmColor and coolColor fields define the two colours to be used at the limits of the spectrum. The warmColor field is used for surfaces facing towards the light, while the coolColor is used for surfaces facing away from the light direction.

The surfaceNormals field contains a 3D texture with at least 3 component values. Each voxel in the texture represents the surface normal direction for the corresponding voxel in the base data source. This texture should be identical in dimensions to the source data. If not, the implementation may interpolate or average between adjacent voxels to determine the average normal at the voxel required. If this field is empty, the implementation shall automatically determine the surface normal using algorithmic means.

The final output colour is determined by combining the interpolated colour value Cg with the opacity of the corresponding voxel. Colour components of the voxel are ignored.
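The two-tone blend for one voxel and one light can be illustrated with the following Python sketch (informative only), assuming unit vectors; the names are illustrative, not part of this specification.

def tone_mapped_color(normal, light_dir, warm=(1.0, 1.0, 0.0), cool=(0.0, 0.0, 1.0)):
    """Gooch-style blend between warmColor and coolColor based on Li . n."""
    cc = (1.0 + sum(n * l for n, l in zip(normal, light_dir))) * 0.5
    return tuple(cc * w + (1.0 - cc) * c for w, c in zip(warm, cool))

print(tone_mapped_color((0, 0, 1), (0, 0, 1)))    # facing the light -> warmColor
print(tone_mapped_color((0, 0, 1), (0, 0, -1)))   # facing away      -> coolColor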

X.4.14 VolumeData

VolumeData : X3DVolumeNode {
  SFVec3f [in,out] dimensions       1 1 1     [0,∞)
  SFNode  [in,out] metadata         NULL      [X3DMetadataObject]
  SFNode  [in,out] renderStyle      NULL      [X3DVolumeRenderStyleNode]
  MFNode  [in,out] voxels           []        [X3DTexture3DNode]
  SFVec3f []       bboxCenter       0 0 0     (-∞,∞)
  SFVec3f []       bboxSize         -1 -1 -1  [0,∞) or -1 -1 -1
}

Defines the volume information to be used on a simple non-segmented volumetric description that uses a single rendering style node for the complete volume.

The renderStyle field allows the user to specify a specific rendering technique to be used on this volumetric object. If the value is not specified by the user, the implementation shall use an OpacityMapVolumeStyle node with default values.

The voxels field provides the raw voxel information to be used by the specific rendering styles. The value is any X3DTexture3DNode type and may have any number of colour components defined. The specific interpretation of the values at each voxel shall be defined by the value of the renderStyle field. If more than one node is defined for this field, each node after the first shall be treated as a mipmap level of monotonically decreasing size. Each level should be half the dimensions of the previous level.
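The halving rule can be illustrated with the following Python sketch (informative only) that validates a user-supplied mipmap chain. The rounding behaviour for odd dimensions is an assumption; the names are illustrative only.

def valid_mipmap_chain(level_dims):
    """level_dims: list of (width, height, depth) tuples, largest level first."""
    for prev, cur in zip(level_dims, level_dims[1:]):
        if any(c != max(p // 2, 1) for p, c in zip(prev, cur)):
            return False
    return True

print(valid_mipmap_chain([(256, 256, 128), (128, 128, 64), (64, 64, 32)]))   # True
print(valid_mipmap_chain([(256, 256, 128), (128, 128, 128)]))                # False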

X.5 Support Levels

The Volume rendering component defines four levels of support as specified in Table X.5.

Table X.5 — Volume rendering component support levels

Level    Prerequisites   Nodes/Features                       Support

Level 1  Core 1          X3DComposableVolumeRenderStyleNode   n/a
         Grouping 1      X3DVolumeRenderStyleNode             n/a
         Shape 1         X3DVolumeNode                        n/a
         Rendering 1     OctTree                              All fields fully supported.
                         OpacityMapVolumeStyle                Only 2D texture transfer functions need to be supported. All other fields fully supported.
                         VolumeData                           All fields fully supported.

Level 2  Core 1          BoundaryEnhancementVolumeStyle       All fields fully supported.
         Grouping 1      ComposedVolumeStyle                  ordered field is always treated as FALSE. All other fields fully supported.
         Shape 1         EdgeEnhancementVolumeStyle           All fields fully supported.
         Rendering 1     MIPVolumeStyle                       All fields fully supported.
                         OpacityMapVolumeStyle                All fields fully supported. 3D transfer functions shall be supported.
                         SegmentedVolumeData                  All fields fully supported.
                         SilhouetteEnhancementVolumeStyle     All fields fully supported.
                         ToneMappedVolumeStyle                All fields fully supported.

Level 3  Core 1          ShadedVolumeStyle                    All fields fully supported except shadows and phaseFunction.
         Grouping 1      CartoonVolumeStyle                   All fields fully supported.
         Shape 1         ComposedVolumeStyle                  All fields fully supported.
         Rendering 1     StippleVolumeStyle                   All fields fully supported.
         Lighting 2

Level 4  Core 1          ShadedVolumeStyle                    All fields fully supported, with at least the Henyey-Greenstein phase function.
         Grouping 1
         Shape 1
         Rendering 1
         Lighting 2