The Camera Component enables an Entity to render the scene. A scene requires at least one enabled camera component to be rendered. Note that multiple camera components can be enabled simultaneously (for split-screen or offscreen rendering, for example).

// Add a pc.CameraComponent to an entity
const entity = new pc.Entity();
entity.addComponent('camera', {
    nearClip: 1,
    farClip: 100,
    fov: 55
});

// Get the pc.CameraComponent on an entity
const cameraComponent = entity.camera;

// Update a property on a camera component
entity.camera.nearClip = 2;

Hierarchy

Constructors

Properties

entity: Entity

The Entity that this Component is attached to.

onPostRender: Function = null

Custom function that is called after the camera renders the scene.

onPreRender: Function = null

Custom function that is called before the camera renders the scene.
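A minimal sketch of how these hooks can be used. The `cameraComponent` object below is a stub standing in for `entity.camera` (its `render` method imitates the engine invoking the hooks around the scene render), so the example is self-contained rather than a real PlayCanvas app; in a real app you would assign the hooks the same way.

```javascript
// Sketch: bracket a camera's render with onPreRender/onPostRender.
// `cameraComponent` is a stub standing in for `entity.camera`.
const calls = [];

const cameraComponent = {
    onPreRender: null,
    onPostRender: null,
    // Stand-in for the engine rendering the scene with this camera.
    render() {
        if (this.onPreRender) this.onPreRender();
        calls.push('render');
        if (this.onPostRender) this.onPostRender();
    }
};

cameraComponent.onPreRender = () => calls.push('pre');
cameraComponent.onPostRender = () => calls.push('post');

cameraComponent.render();
console.log(calls.join(',')); // pre,render,post
```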

system: ComponentSystem

The ComponentSystem used to create this Component.

Accessors

Methods

  • fire — Fire an event; all additional arguments are passed on to the event listener.

    Parameters

    • name: string

      Name of event to fire.

    • Optional arg1: any

      First argument that is passed to the event handler.

    • Optional arg2: any

      Second argument that is passed to the event handler.

    • Optional arg3: any

      Third argument that is passed to the event handler.

    • Optional arg4: any

      Fourth argument that is passed to the event handler.

    • Optional arg5: any

      Fifth argument that is passed to the event handler.

    • Optional arg6: any

      Sixth argument that is passed to the event handler.

    • Optional arg7: any

      Seventh argument that is passed to the event handler.

    • Optional arg8: any

      Eighth argument that is passed to the event handler.

    Returns EventHandler

    Self for chaining.

    obj.fire('test', 'This is the message');
    
  • hasEvent — Test if there are any handlers bound to an event name.

    Parameters

    • name: string

      The name of the event to test.

    Returns boolean

    True if the object has handlers bound to the specified event name.

    obj.on('test', function () { }); // bind an event to 'test'
    obj.hasEvent('test'); // returns true
    obj.hasEvent('hello'); // returns false
  • off — Detach an event handler from an event. If the callback is not provided, all callbacks are unbound from the event; if the scope is not provided, all handlers with the given callback are unbound.

    Parameters

    • Optional name: string

      Name of the event to unbind.

    • Optional callback: HandleEventCallback

      Function to be unbound.

    • Optional scope: object

      Scope that was used as 'this' when the event was fired.

    Returns EventHandler

    Self for chaining.

    const handler = function () {
        // handle the 'test' event
    };
    obj.on('test', handler);

    obj.off(); // Removes all event handlers
    obj.off('test'); // Removes all handlers bound to 'test'
    obj.off('test', handler); // Removes the handler function bound to 'test'
    obj.off('test', handler, this); // Removes the handler function bound to 'test' with scope this
  • on — Attach an event handler to an event.

    Parameters

    • name: string

      Name of the event to bind the callback to.

    • callback: HandleEventCallback

      Function that is called when event is fired. Note the callback is limited to 8 arguments.

    • Optional scope: object = ...

      Object to use as 'this' when the event is fired; defaults to the current 'this'.

    Returns EventHandle

    Can be used to remove the event handler in the future.

    obj.on('test', function (a, b) {
        console.log(a + b);
    });
    obj.fire('test', 1, 2); // prints 3 to the console

    const evt = obj.on('test', function (a, b) {
        console.log(a + b);
    });
    // some time later
    evt.off();
  • once — Attach an event handler to an event. This handler will be removed after being fired once.

    Parameters

    • name: string

      Name of the event to bind the callback to.

    • callback: HandleEventCallback

      Function that is called when event is fired. Note the callback is limited to 8 arguments.

    • Optional scope: object = ...

      Object to use as 'this' when the event is fired; defaults to the current 'this'.

    Returns EventHandle

    Can be used to remove the event handler in the future.

    obj.once('test', function (a, b) {
        console.log(a + b);
    });
    obj.fire('test', 1, 2); // prints 3 to the console
    obj.fire('test', 1, 2); // not going to get handled
  • requestSceneColorMap — Request the scene to generate a texture containing the scene color map. Note that this call is accumulative: for each enable request, a matching disable request must eventually be made.

    Parameters

    • enabled: boolean

      True to request the generation, false to disable it.

    Returns void

  • requestSceneDepthMap — Request the scene to generate a texture containing the scene depth map. Note that this call is accumulative: for each enable request, a matching disable request must eventually be made.

    Parameters

    • enabled: boolean

      True to request the generation, false to disable it.

    Returns void
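The accumulative semantics of these two requests can be illustrated with a simple counter. This is not engine code, just a sketch of the behavior the documentation describes: each enable increments an internal count, each disable decrements it, and the map is only generated while the count is positive, so several independent consumers can share one map.

```javascript
// Illustration (not engine code) of the accumulative request semantics of
// requestSceneColorMap / requestSceneDepthMap.
class MapRequestCounter {
    constructor() {
        this.count = 0;
    }
    // Mirrors request(enabled): true increments, false decrements.
    request(enabled) {
        this.count += enabled ? 1 : -1;
    }
    // The map is generated while at least one request is outstanding.
    get active() {
        return this.count > 0;
    }
}

const colorMap = new MapRequestCounter();
colorMap.request(true);  // first consumer enables the map
colorMap.request(true);  // a second consumer also needs it
colorMap.request(false); // first consumer done; map still generated
console.log(colorMap.active); // true
colorMap.request(false); // last consumer done; generation stops
console.log(colorMap.active); // false
```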

  • screenToWorld — Convert a point from 2D screen space to 3D world space.

    Parameters

    • screenx: number

      X coordinate on PlayCanvas' canvas element. Should be in the range 0 to canvas.offsetWidth of the application's canvas element.

    • screeny: number

      Y coordinate on PlayCanvas' canvas element. Should be in the range 0 to canvas.offsetHeight of the application's canvas element.

    • cameraz: number

      The distance from the camera in world space to create the new point.

    • OptionalworldCoord: Vec3

      3D vector to receive world coordinate result.

    Returns Vec3

    The world space coordinate.

    // Get the start and end points of a 3D ray fired from a screen click position
    const start = entity.camera.screenToWorld(clickX, clickY, entity.camera.nearClip);
    const end = entity.camera.screenToWorld(clickX, clickY, entity.camera.farClip);

    // Use the ray coordinates to perform a raycast
    app.systems.rigidbody.raycastFirst(start, end, function (result) {
        console.log("Entity " + result.entity.name + " was selected");
    });
  • setShaderPass — Sets the name of the shader pass the camera will use when rendering.

    In addition to existing names (see the parameter description), a new name can be specified, which creates a new shader pass with the given name. The name provided can only use alphanumeric characters and underscores. When a shader is compiled for the new pass, a define is added to the shader. For example, if the name is 'custom_rendering', the define 'CUSTOM_RENDERING_PASS' is added to the shader, allowing the shader code to conditionally execute code only when that shader pass is active.

    Another instance where this approach may prove useful is when a camera needs to render a more cost-effective version of shaders, such as when creating a reflection texture. To accomplish this, a callback on the material that triggers during shader compilation can be used. This callback can modify the shader generation options specifically for this shader pass.

    const shaderPassId = camera.setShaderPass('custom_rendering');

    material.onUpdateShader = function (options) {
        if (options.pass === shaderPassId) {
            options.litOptions.normalMapEnabled = false;
            options.litOptions.useSpecular = false;
        }
        return options;
    };

    Returns number

    The id of the shader pass.

  • startXr — Attempt to start an XR session with this camera.

    Parameters

    • type: string

      The type of session. Can be one of the following:

      • XRTYPE_INLINE: Inline - always available type of session. It has limited feature availability and is rendered into an HTML element.
      • XRTYPE_VR: Immersive VR - session that provides exclusive access to the VR device with the best available tracking features.
      • XRTYPE_AR: Immersive AR - session that provides exclusive access to the VR/AR device that is intended to be blended with the real-world environment.
    • spaceType: string

      Reference space type. Can be one of the following:

      • XRSPACE_VIEWER: Viewer - always supported space with some basic tracking capabilities.
      • XRSPACE_LOCAL: Local - represents a tracking space with a native origin near the viewer at the time of creation. It is meant for seated or basic local XR sessions.
      • XRSPACE_LOCALFLOOR: Local Floor - represents a tracking space with a native origin at the floor in a safe position for the user to stand. The y-axis equals 0 at floor level. Floor level value might be estimated by the underlying platform. It is meant for seated or basic local XR sessions.
      • XRSPACE_BOUNDEDFLOOR: Bounded Floor - represents a tracking space with its native origin at the floor, where the user is expected to move within a pre-established boundary.
      • XRSPACE_UNBOUNDED: Unbounded - represents a tracking space where the user is expected to move freely around their environment, potentially long distances from their starting point.
    • Optional options: {
          anchors?: boolean;
          callback?: XrErrorCallback;
          depthSensing?: { dataFormatPreference?: string; usagePreference?: string };
          imageTracking?: boolean;
          optionalFeatures?: string[];
          planeDetection?: boolean;
      }

      Object with options for XR session initialization.

      • Optional anchors?: boolean

        Optional boolean to attempt to enable XrAnchors.

      • Optional callback?: XrErrorCallback

        Optional callback function called once the session is started. The callback has one argument, Error, which is null if the XR session started successfully.

      • Optional depthSensing?: { dataFormatPreference?: string; usagePreference?: string }

        Optional object with depth sensing parameters to attempt to enable XrDepthSensing.

        • Optional dataFormatPreference?: string

          Optional data format preference for depth sensing. Can be 'luminance-alpha' or 'float32' (XRDEPTHSENSINGFORMAT_*); defaults to 'luminance-alpha'. The most preferred and supported format will be chosen by the underlying depth sensing system.

        • Optional usagePreference?: string

          Optional usage preference for depth sensing. Can be 'cpu-optimized' or 'gpu-optimized' (XRDEPTHSENSINGUSAGE_*); defaults to 'cpu-optimized'. The most preferred and supported usage will be chosen by the underlying depth sensing system.

      • Optional imageTracking?: boolean

        Set to true to attempt to enable XrImageTracking.

      • Optional optionalFeatures?: string[]

        Optional features for XRSession start, used to get access to additional WebXR spec extensions.

      • Optional planeDetection?: boolean

        Set to true to attempt to enable XrPlaneDetection.

    Returns void

    // On an entity with a camera component
    this.entity.camera.startXr(pc.XRTYPE_VR, pc.XRSPACE_LOCAL, {
        callback: function (err) {
            if (err) {
                // failed to start XR session
            } else {
                // in XR
            }
        }
    });