Internal: Component constructor
Type of the component. Ex. "TransformComponent"
Owning entity. Only passed when wrapping an existing entity.
The color render texture that the camera's rendered scene is output to.
Debug variable that shrinks the frustum to be smaller than required, as an aid when debugging culling.
Optional depth render texture that the camera's rendered scene is output to.
Whether rendering should use a depth buffer, either backed by a provided texture or a buffer created by the renderer.
Whether the colorTexture expects values in the sRGB color space.
Internal flag set if the component is a Cogs Core component.
Find the intersection of the frustum edges and the Z axis and move the camera back; tries to minimize the field-of-view of the combined frustum. See the sketch below.
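The following is an illustrative 2D sketch of the idea described above, not the engine's actual implementation: each view frustum's side edges are intersected with the Z axis, the shared apex is backed up to the rearmost crossing, and the field of view is widened just enough to contain every edge. All names and types here are hypothetical.

```typescript
interface Edge2D {
  // A frustum side edge in the XZ plane of the shared reference space
  // (camera looks down -Z). Assumes dir.x !== 0, i.e. the edge is not
  // parallel to the Z axis.
  point: { x: number; z: number };
  dir: { x: number; z: number };
}

/** Z coordinate where an edge line crosses the Z axis (x = 0). */
function intersectZAxis(edge: Edge2D): number {
  const t = -edge.point.x / edge.dir.x;
  return edge.point.z + t * edge.dir.z;
}

/** Apex position and half field-of-view of a symmetric frustum containing all edges. */
function combineFrusta(edges: Edge2D[]): { apexZ: number; halfFov: number } {
  // The combined apex must sit at (or behind) the rearmost axis crossing.
  const apexZ = Math.max(...edges.map(intersectZAxis));

  // Widen the half-angle until every edge direction fits inside the cone.
  let halfFov = 0;
  for (const e of edges) {
    // Angle of the edge direction relative to the -Z view direction.
    const angle = Math.atan2(Math.abs(e.dir.x), -e.dir.z);
    halfFov = Math.max(halfFov, angle);
  }
  return { apexZ, halfFov };
}
```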
Far-value to use when overrideNearAndFar is enabled.
Near-value to use when overrideNearAndFar is enabled.
If true, override near and far values embedded in the provided projection matrix with overrideNearValue and overrideFarValue.
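For illustration, a minimal sketch of what "near and far values embedded in the projection matrix" refers to, assuming a WebGL/OpenGL-style column-major perspective matrix with clip-space z in [-1, 1]; the engine's actual convention and override logic may differ, and the function name is hypothetical.

```typescript
function overrideNearFar(proj: Float32Array, near: number, far: number): void {
  // In this convention only two entries depend on the near/far planes:
  //   m[10] = -(far + near) / (far - near)
  //   m[14] = -(2 * far * near) / (far - near)
  proj[10] = -(far + near) / (far - near);
  proj[14] = -(2 * far * near) / (far - near);
}
```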
One projection matrix per layer.
Reference camera to pull data from.
Number of multi-sampling samples; a value of 0 or 1 implies no multi-sampling.
One affine transform matrix per layer.
When updating the reference camera, also update near and far.
Update the reference camera using the projection and transform matrices.
When updating the reference camera, use the texture dimensions to set the camera viewport resolution.
Gets the optional owning entity.
Component name == type, kept for compatibility.
Gets the type of the component. Ex. "TransformComponent", "SceneComponent", etc.
Static
createInternal: Creates a new component of the given type (name). The component is wrapped when added to an entity (see the usage sketch below).
Type of the component. Ex. "TransformComponent"
Created Cogs.js component.
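A hypothetical usage sketch for createInternal; the host object ("Component") and its typing are placeholders, and only the method name, its string argument (the component type name) and the wrapping behaviour come from the documentation above.

```typescript
declare const Component: { createInternal(type: string): unknown };

const transform = Component.createInternal("TransformComponent");
// The returned component is wrapped when it is later added to an entity.
```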
Static
wrap
Multi-view: Render a set of related views into array texture layers. If the graphics device supports multi-view rendering, a single draw sequence renders into all the layers, which avoids issuing all the draw commands once per eye in VR/AR rendering. If multi-view rendering is not available, the behaviour is emulated behind the scenes using multiple draw sequences.

For this to make sense, the views must be related, that is, it must be sensible to use identical draw sequences for them. The referenceCamera represents the commonality between the views and specifies everything except the actual transforms (culling, render layers, etc.). The colorTexture and depthTexture are the array textures to render into. The views are specified by the matrices projections and transforms, which should have one element per layer. If updateReferenceCamera is enabled, the referenceCamera position and field of view are updated so that the reference camera contains the per-layer frusta specified by projections and transforms.

The texture arrays should contain either a single Texture2DArray whose number of slices matches the number of views, or multiple Texture2D or RenderBuffer textures matching the number of views.

The transformComponent of the entity anchors the view transform matrices in world space, that is, the transform matrix of the transform component is applied to the provided matrices. This allows you to position and orient the reference space for the matrices within the virtual world space.
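A hedged sketch of how the documented multi-view properties fit together for a two-view (stereo) setup. The property names (referenceCamera, colorTexture, depthTexture, projections, transforms, updateReferenceCamera) come from the documentation above; everything else, including the use of gl-matrix and how the component, camera and array textures are obtained, is an assumption and not the actual Cogs.js API.

```typescript
import { mat4 } from "gl-matrix";

declare const multiViewComponent: any; // placeholder for the multi-view component instance
declare const mainCamera: any;         // placeholder for an existing reference camera
declare const eyeColorArray: any;      // placeholder Texture2DArray, one slice per view
declare const eyeDepthArray: any;      // placeholder Texture2DArray, one slice per view

// Two related views (e.g. left/right eye): one projection and one transform per layer.
const projections = [mat4.create(), mat4.create()];
mat4.perspective(projections[0], Math.PI / 3, 1.0, 0.1, 100.0);
mat4.perspective(projections[1], Math.PI / 3, 1.0, 0.1, 100.0);

const transforms = [
  mat4.fromTranslation(mat4.create(), [-0.032, 0, 0]), // left-eye offset in reference space
  mat4.fromTranslation(mat4.create(), [+0.032, 0, 0]), // right-eye offset in reference space
];

// Culling, render layers etc. are taken from the reference camera; only the
// per-layer matrices differ between the views.
multiViewComponent.referenceCamera = mainCamera;
multiViewComponent.colorTexture = eyeColorArray;
multiViewComponent.depthTexture = eyeDepthArray;
multiViewComponent.projections = projections;
multiViewComponent.transforms = transforms;

// Let the engine fit the reference camera around the per-layer frusta.
multiViewComponent.updateReferenceCamera = true;
```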