Engine API Reference - v2.6.1

    Class CameraComponent

    The CameraComponent enables an Entity to render the scene. A scene requires at least one enabled camera component to be rendered. The camera's view direction is along the negative z-axis of the owner entity.

    Note that multiple camera components can be enabled simultaneously (for split-screen or offscreen rendering, for example).

    You should never need to use the CameraComponent constructor directly. To add a CameraComponent to an Entity, use Entity#addComponent:

    const entity = new pc.Entity();
    entity.addComponent('camera', {
        nearClip: 1,
        farClip: 100,
        fov: 55
    });

    Once the CameraComponent is added to the entity, you can access it via the camera property:

    entity.camera.nearClip = 2; // Set the near clip of the camera

    console.log(entity.camera.nearClip); // Get the near clip of the camera

    Hierarchy

    Index

    Properties

    entity: Entity

    The Entity that this Component is attached to.

    system: CameraComponentSystem

    The ComponentSystem used to create this Component.

    Accessors

    • get aspectRatio(): number

      Gets the aspect ratio (width divided by height) of the camera.

      Returns number

    • set aspectRatio(value: number): void

      Sets the aspect ratio (width divided by height) of the camera. If aspectRatioMode is ASPECT_AUTO, then this value will be automatically calculated every frame, and you can only read it. If it's ASPECT_MANUAL, you can set the value.

      Parameters

      • value: number

      Returns void
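
      As a minimal sketch of manual aspect ratio control: the canvas dimensions below are hypothetical, and the pc.* lines are commented out because they require a running PlayCanvas application.

```javascript
// Hypothetical canvas dimensions; in a real app, read these from the
// application's canvas element (canvas.offsetWidth / canvas.offsetHeight).
const width = 1920;
const height = 1080;

// Aspect ratio is width divided by height.
const aspect = width / height;

// With the mode set to manual, the value can be assigned directly
// (sketch; requires a running PlayCanvas application):
// entity.camera.aspectRatioMode = pc.ASPECT_MANUAL;
// entity.camera.aspectRatio = aspect;
```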

    • get cullFaces(): boolean

      Gets whether the camera will cull triangle faces.

      Returns boolean

    • set cullFaces(value: boolean): void

      Sets whether the camera will cull triangle faces. If true, the camera will take material.cull into account. Otherwise both front and back faces will be rendered. Defaults to true.

      Parameters

      • value: boolean

      Returns void

    • get disablePostEffectsLayer(): number

      Gets the ID of the layer at which post-processing for this camera stops being applied.

      Returns number

    • set disablePostEffectsLayer(layer: number): void

      Sets the ID of the layer at which post-processing for this camera stops being applied. Defaults to LAYERID_UI, which causes post-processing to not be applied to the UI layer and any following layers for the camera. Set to undefined for post-processing to be applied to all layers of the camera.

      Parameters

      • layer: number

      Returns void
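
      The cut-off behaviour can be sketched in plain JavaScript. The layer IDs below are illustrative, not the engine's actual constant values: post-processing applies to the camera's layers before the disable layer, and to all layers when no disable layer is set.

```javascript
// Illustrative layer IDs (not the engine's actual constant values).
const LAYERID_WORLD = 0;
const LAYERID_SKYBOX = 2;
const LAYERID_UI = 4;

// The camera's layer order and the layer at which post-processing stops.
const layerOrder = [LAYERID_WORLD, LAYERID_SKYBOX, LAYERID_UI];
const disableAt = LAYERID_UI;

// Layers that still receive post-processing: everything before the
// disable layer. If the disable layer is not found (e.g. undefined),
// every layer qualifies.
const cut = layerOrder.indexOf(disableAt);
const postProcessed = cut === -1 ? layerOrder.slice() : layerOrder.slice(0, cut);
```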

    • get flipFaces(): boolean

      Gets whether the camera will flip the face direction of triangles.

      Returns boolean

    • set flipFaces(value: boolean): void

      Sets whether the camera will flip the face direction of triangles. If set to true, the camera will invert front and back faces. Can be useful for reflection rendering. Defaults to false.

      Parameters

      • value: boolean

      Returns void

    • get frustumCulling(): boolean

      Gets whether frustum culling is enabled.

      Returns boolean

    • set frustumCulling(value: boolean): void

      Sets whether frustum culling is enabled. This controls the culling of mesh instances against the camera frustum, i.e. whether objects outside of the camera's frustum should be omitted from rendering. If false, all mesh instances in the scene are rendered by the camera, regardless of visibility. Defaults to true.

      Parameters

      • value: boolean

      Returns void

    • get jitter(): number

      Gets the jitter intensity applied in the projection matrix.

      Returns number

    • set jitter(value: number): void

      Sets the jitter intensity applied in the projection matrix. Used for jittered sampling by TAA. A value of 1 represents a jitter in the range of [-1, 1] of a pixel. Smaller values result in a crisper yet more aliased outcome, whereas increased values produce a smoother but blurred result. Defaults to 0, representing no jitter.

      Parameters

      • value: number

      Returns void
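
      The jitter value scales a per-frame sub-pixel offset. A rough sketch of that scaling follows; the sample value is hypothetical, and the engine uses its own sample sequence internally.

```javascript
// A jitter intensity of 1 maps a raw sample in [-1, 1] to an offset of
// up to one pixel; smaller intensities shrink that range proportionally.
const jitter = 0.5;     // hypothetical intensity
const rawSample = 0.8;  // hypothetical sample in [-1, 1]
const pixelOffset = jitter * rawSample;

// In a real app (typically together with TAA):
// entity.camera.jitter = jitter;
```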

    • get layers(): number[]

      Gets the array of layer IDs (Layer#id) to which this camera belongs.

      Returns number[]

    • set layers(newValue: number[]): void

      Sets the array of layer IDs (Layer#id) to which this camera should belong. Don't push, pop, splice or otherwise modify this array; to change it, assign a new array instead. Defaults to [LAYERID_WORLD, LAYERID_DEPTH, LAYERID_SKYBOX, LAYERID_UI, LAYERID_IMMEDIATE].

      Parameters

      • newValue: number[]

      Returns void
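
      Because the array must not be mutated in place, changing the camera's layers means assigning a new array. A sketch with illustrative layer IDs (the custom layer id is hypothetical):

```javascript
// Illustrative layer IDs (not the engine's actual constant values).
const LAYERID_WORLD = 0;
const LAYERID_UI = 4;
const CUSTOM_LAYER = 42; // hypothetical custom layer id

const current = [LAYERID_WORLD, LAYERID_UI];

// Don't do current.push(CUSTOM_LAYER) - build a new array instead,
// leaving the original untouched.
const next = current.concat([CUSTOM_LAYER]);

// entity.camera.layers = next; // assign in a real app
```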

    Methods

    • endXr

      Attempt to end the XR session of this camera.

      Parameters

      • Optional callback: XrErrorCallback

        Optional callback function called once the session has ended. The callback has one argument Error - it is null if the XR session ended successfully.

      Returns void

      // On an entity with a camera component
      this.entity.camera.endXr(function (err) {
          // not anymore in XR
      });
    • fire

      Fire an event; all additional arguments are passed on to the event listener.

      Parameters

      • name: string

        Name of event to fire.

      • Optional arg1: any

        First argument that is passed to the event handler.

      • Optional arg2: any

        Second argument that is passed to the event handler.

      • Optional arg3: any

        Third argument that is passed to the event handler.

      • Optional arg4: any

        Fourth argument that is passed to the event handler.

      • Optional arg5: any

        Fifth argument that is passed to the event handler.

      • Optional arg6: any

        Sixth argument that is passed to the event handler.

      • Optional arg7: any

        Seventh argument that is passed to the event handler.

      • Optional arg8: any

        Eighth argument that is passed to the event handler.

      Returns EventHandler

      Self for chaining.

      obj.fire('test', 'This is the message');
      
    • hasEvent

      Test if there are any handlers bound to an event name.

      Parameters

      • name: string

        The name of the event to test.

      Returns boolean

      True if the object has handlers bound to the specified event name.

      obj.on('test', () => {}); // bind an event to 'test'
      obj.hasEvent('test'); // returns true
      obj.hasEvent('hello'); // returns false
    • off

      Detach an event handler from an event. If callback is not provided, all callbacks are unbound from the event. If scope is not provided, all events with the callback will be unbound.

      Parameters

      • Optional name: string

        Name of the event to unbind.

      • Optional callback: HandleEventCallback

        Function to be unbound.

      • Optional scope: any

        Scope that was used as the 'this' when the event is fired.

      Returns EventHandler

      Self for chaining.

      const handler = () => {};
      obj.on('test', handler);

      obj.off(); // Removes all events
      obj.off('test'); // Removes all events called 'test'
      obj.off('test', handler); // Removes the specified handler bound to 'test'
      obj.off('test', handler, this); // Removes the specified handler bound to 'test' with the scope this
    • on

      Attach an event handler to an event.

      Parameters

      • name: string

        Name of the event to bind the callback to.

      • callback: HandleEventCallback

        Function that is called when event is fired. Note the callback is limited to 8 arguments.

      • Optional scope: any = ...

        Object to use as 'this' when the event is fired, defaults to current this.

      Returns EventHandle

      Can be used for removing event in the future.

      obj.on('test', (a, b) => {
          console.log(a + b);
      });
      obj.fire('test', 1, 2); // prints 3 to the console

      const evt = obj.on('test', (a, b) => {
          console.log(a + b);
      });
      // some time later
      evt.off();
    • once

      Attach an event handler to an event. This handler will be removed after being fired once.

      Parameters

      • name: string

        Name of the event to bind the callback to.

      • callback: HandleEventCallback

        Function that is called when event is fired. Note the callback is limited to 8 arguments.

      • Optional scope: any = ...

        Object to use as 'this' when the event is fired, defaults to current this.

      Returns EventHandle

      Can be used for removing the event in the future.

      obj.once('test', (a, b) => {
          console.log(a + b);
      });
      obj.fire('test', 1, 2); // prints 3 to the console
      obj.fire('test', 1, 2); // not going to get handled
    • requestSceneColorMap

      Request the scene to generate a texture containing the scene color map. Note that this call is cumulative: for each enable request, a matching disable request needs to be called. This setting is ignored when CameraComponent#renderPasses is used.

      Parameters

      • enabled: boolean

        True to request the generation, false to disable it.

      Returns void

    • requestSceneDepthMap

      Request the scene to generate a texture containing the scene depth map. Note that this call is cumulative: for each enable request, a matching disable request needs to be called. This setting is ignored when CameraComponent#renderPasses is used.

      Parameters

      • enabled: boolean

        True to request the generation, false to disable it.

      Returns void
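
      The enable/disable pairing above behaves like reference counting: the map stays enabled while at least one enable request is outstanding. A sketch of that semantics (the counter is illustrative, not the engine's internal state):

```javascript
// The map remains enabled while the reference count is positive.
let refCount = 0;
function request(enabled) {
    refCount += enabled ? 1 : -1;
}

request(true);  // consumer A enables the map
request(true);  // consumer B enables the map
request(false); // consumer A is done

const stillEnabled = refCount > 0; // true - consumer B still needs the map
```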

    • screenToWorld

      Convert a point from 2D screen space to 3D world space.

      Parameters

      • screenx: number

        X coordinate on PlayCanvas' canvas element. Should be in the range 0 to canvas.offsetWidth of the application's canvas element.

      • screeny: number

        Y coordinate on PlayCanvas' canvas element. Should be in the range 0 to canvas.offsetHeight of the application's canvas element.

      • cameraz: number

        The distance from the camera in world space to create the new point.

      • Optional worldCoord: Vec3

        3D vector to receive world coordinate result.

      Returns Vec3

      The world space coordinate.

      // Get the start and end points of a 3D ray fired from a screen click position
      const start = entity.camera.screenToWorld(clickX, clickY, entity.camera.nearClip);
      const end = entity.camera.screenToWorld(clickX, clickY, entity.camera.farClip);

      // Use the ray coordinates to perform a raycast
      app.systems.rigidbody.raycastFirst(start, end, function (result) {
          console.log("Entity " + result.entity.name + " was selected");
      });
    • setShaderPass

      Sets the name of the shader pass the camera will use when rendering.

      In addition to existing names (see the parameter description), a new name can be specified, which creates a new shader pass with the given name. The name provided can only use alphanumeric characters and underscores. When a shader is compiled for the new pass, a define is added to the shader. For example, if the name is 'custom_rendering', the define 'CUSTOM_RENDERING_PASS' is added to the shader, allowing the shader code to conditionally execute code only when that shader pass is active.

      Another instance where this approach may prove useful is when a camera needs to render a more cost-effective version of shaders, such as when creating a reflection texture. To accomplish this, a callback on the material that triggers during shader compilation can be used. This callback can modify the shader generation options specifically for this shader pass.

      const shaderPassId = camera.setShaderPass('custom_rendering');

      material.onUpdateShader = function (options) {
          if (options.pass === shaderPassId) {
              options.litOptions.normalMapEnabled = false;
              options.litOptions.useSpecular = false;
          }
          return options;
      };

      Returns number

      The id of the shader pass.

    • startXr

      Attempt to start an XR session with this camera.

      Parameters

      • type: string

        The type of session. Can be one of the following:

        • XRTYPE_INLINE: Inline - always available type of session. It has limited feature availability and is rendered into an HTML element.
        • XRTYPE_VR: Immersive VR - session that provides exclusive access to the VR device with the best available tracking features.
        • XRTYPE_AR: Immersive AR - session that provides exclusive access to the VR/AR device that is intended to be blended with the real-world environment.
      • spaceType: string

        Reference space type. Can be one of the following:

        • XRSPACE_VIEWER: Viewer - always supported space with some basic tracking capabilities.
        • XRSPACE_LOCAL: Local - represents a tracking space with a native origin near the viewer at the time of creation. It is meant for seated or basic local XR sessions.
        • XRSPACE_LOCALFLOOR: Local Floor - represents a tracking space with a native origin at the floor in a safe position for the user to stand. The y-axis equals 0 at floor level. Floor level value might be estimated by the underlying platform. It is meant for seated or basic local XR sessions.
        • XRSPACE_BOUNDEDFLOOR: Bounded Floor - represents a tracking space with its native origin at the floor, where the user is expected to move within a pre-established boundary.
        • XRSPACE_UNBOUNDED: Unbounded - represents a tracking space where the user is expected to move freely around their environment, potentially long distances from their starting point.
      • Optional options: {
            anchors?: boolean;
            callback?: XrErrorCallback;
            depthSensing?: { dataFormatPreference?: string; usagePreference?: string };
            imageTracking?: boolean;
            optionalFeatures?: string[];
            planeDetection?: boolean;
        }

        Object with options for XR session initialization.

        • Optional anchors?: boolean

          Optional boolean to attempt to enable XrAnchors.

        • Optional callback?: XrErrorCallback

          Optional callback function called once the session is started. The callback has one argument Error - it is null if the XR session started successfully.

        • Optional depthSensing?: { dataFormatPreference?: string; usagePreference?: string }

          Optional object with parameters to attempt to enable depth sensing.

          • Optional dataFormatPreference?: string

            Optional data format preference for depth sensing. Can be 'luminance-alpha' or 'float32' (XRDEPTHSENSINGFORMAT_*); defaults to 'luminance-alpha'. The most preferred and supported format will be chosen by the underlying depth sensing system.

          • Optional usagePreference?: string

            Optional usage preference for depth sensing. Can be 'cpu-optimized' or 'gpu-optimized' (XRDEPTHSENSINGUSAGE_*); defaults to 'cpu-optimized'. The most preferred and supported usage will be chosen by the underlying depth sensing system.

        • Optional imageTracking?: boolean

          Set to true to attempt to enable XrImageTracking.

        • Optional optionalFeatures?: string[]

          Optional features for the XRSession start. Used for getting access to additional WebXR spec extensions.

        • Optional planeDetection?: boolean

          Set to true to attempt to enable XrPlaneDetection.

      Returns void

      // On an entity with a camera component
      this.entity.camera.startXr(pc.XRTYPE_VR, pc.XRSPACE_LOCAL, {
          callback: function (err) {
              if (err) {
                  // failed to start XR session
              } else {
                  // in XR
              }
          }
      });