Building a 3D Website with React

December 2022

How to create an interactive website with 3D elements in React.

Creating websites is not just about arranging text and rectangles. In this post, we'll explore how to use @react-three/fiber and a web-based tool called Spline to create a 3D website. The site will blend classic UI elements with 3D parts rendered in several Canvas elements, taking advantage of the browser's built-in WebGL functionality. While 3D on websites has yet to offer many practical use cases, it can enhance a website as a decorative element and help it stand out. In addition, games can be developed for the browser, where users can play them easily and safely without installing anything.

Much of traditional web development, especially in the React ecosystem, can be applied to React Three Fiber. However, the learning curve can be steep, particularly when it comes to specific requirements that aren't easily inferred from the examples available. Getting started is relatively straightforward, but the subject matter is vast, and there are many intricacies to consider, much like React Native.

In this post, readers will learn how to create a website similar to the landing page showcasing a React Native plugin described in a previous post about numic.

3D Modeling Tools: Spline

Most web designers hold free web-based design tools, such as Figma, in high esteem. However, the 3D space has been much less accessible, as game designers typically use heavy desktop software like Blender or Cinema 4D. These programs offer a wide range of functionalities, but can be difficult to learn.

React Three Fiber has been available for a few years now, but Spline's beta was only released earlier this year. Since then, it has become the ideal tool for efficiently designing 3D objects and scenes to be used with React Three Fiber. Spline is relatively easy to learn for those with experience in 2D design tools, and they also offer a dedicated YouTube channel with numerous quick tutorials for specific features.

Using Spline, this blog's icon has been transformed into an embedded 3D model, which you can click and drag (or pinch) to move around.

Exporting Assets for @react-three/fiber in Spline

Spline includes a code exporter designed specifically for React Three Fiber, although currently it can only export the entire scene. The tool generates JSX code that can be readily used to render any desired part of the scene and to make dynamic changes to each part individually in the React source code. To use this approach, you need to utilize the custom @splinetool/r3f-spline loader and a .splinecode file. While this approach works well, you can also export the scene as a GLTF file and then split it up into JSX using the online GLTF conversion tool created by Poimandres, the developer collective behind all of these React Three Fiber efforts. When exported, each layer in the tool will be a separate geometry, which can lead to numerous separate parts being generated unnecessarily. Currently, there is no option to merge different layers in a way that would also merge them in the export. However, a Flatten option (similar to Figma) might be added later, which would be quite helpful.
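
As a rough sketch of the .splinecode approach (assuming the package's useSpline hook; the URL, node name, and material name are placeholders that depend on the export), the loader can be used like this inside a Canvas:

import useSpline from '@splinetool/r3f-spline'

function Logo() {
  // Load the exported scene and pick out the parts to render.
  const { nodes, materials } = useSpline('https://prod.spline.design/your-scene-id/scene.splinecode')

  return <mesh geometry={nodes.Shape.geometry} material={materials['Shape Material']} />
}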

Womp

Womp is a new browser-based design tool that is quite similar to Spline. However, it's still in the Alpha stage of development, and its main focus is on creating individual 3D objects and materials. Unlike Spline, Womp doesn't have any specific support for use with React Three Fiber. Nevertheless, you can still export models as .obj files and load them through the regular useLoader method. Womp also has a dedicated 3D YouTube channel, which offers a wide range of easy-to-follow tutorials, similar to those available for Spline.
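
Here is a minimal sketch of loading such an export with the standard Three.js OBJLoader (the file path is a placeholder):

import { useLoader } from '@react-three/fiber'
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader'

function WompModel() {
  // Load the exported .obj file and render it as-is.
  const object = useLoader(OBJLoader, '/womp-export.obj')

  return <primitive object={object} />
}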

React Three Fiber

three is the most widely used 3D library for the web. To use it with React and JSX, you can use @react-three/fiber, which wraps Three.js and provides the entire interface, along with an ecosystem of plugins on top. While having knowledge of Three.js can be beneficial, getting started directly with the React version works just as well.

Mathematical, Geometrical and Physics Fundamentals

Rendering 3D graphics generally requires a good understanding of physics, geometry, and mathematics. The most critical aspect is likely positioning with x, y, z coordinates, as well as rotations in these dimensions. However, most complex algorithms have already been abstracted in a way that makes it possible to use the interface without deep knowledge of the underlying principles. Even though many relevant concepts were covered in school, it's easy to forget them. Basic trigonometric functions like sine, cosine, and tangent can be looked up when needed, for example to move objects along a circle.
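
As a small illustration of the trigonometry that does come up, the following sketch moves a box along a circle with a radius of 5 around the center of the scene:

import { useRef } from 'react'
import { useFrame } from '@react-three/fiber'

function Orbit() {
  const mesh = useRef()

  // Use the elapsed time as the angle to walk around a circle.
  useFrame(({ clock }) => {
    const angle = clock.getElapsedTime()
    mesh.current.position.x = Math.cos(angle) * 5
    mesh.current.position.z = Math.sin(angle) * 5
  })

  return (
    <mesh ref={mesh}>
      <boxGeometry args={[1, 1, 1]} />
      <meshStandardMaterial color="orange" />
    </mesh>
  )
}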

Learning React Three Fiber Along With Three.js

While comprehensive knowledge of Three.js would ideally be helpful for working with React Three Fiber, many people nowadays begin with React before learning HTML in detail, and the same can apply here. Since React Three Fiber is written in JSX and differs significantly from regular Three.js code, starting directly with the React version is an excellent option.

Neither the Three.js documentation nor the React Three Fiber documentation offers a comprehensive guide. Both do provide a quick getting-started guide, which is an excellent starting point for beginners. Beyond that, the documentation primarily describes the available interfaces, which isn't much help when just getting started.

The React Three Fiber documentation is particularly useful for its comprehensive collection of examples. These examples are regularly updated and serve as excellent resources for learning how to build things with React Three Fiber. If you already have a project in mind, you can start with a relevant example and gradually modify it to match your desired outcome. While this learning-by-doing approach can be effective, it can also be frustrating when faced with obstacles.

Three.js Journey offers a guided approach to learning Three.js, but it comes at a cost. If you're specifically interested in using React Three Fiber to create a configurator, there is now also an Udemy course available.

Ecosystem

React Three Fiber has a robust ecosystem of abstractions and helpers that can save you from having to implement complex mathematics and physics yourself. The most important tool in this arsenal is @react-three/drei. This comprehensive collection of helpers offers a wide variety of functionality and is a great asset when working with React Three Fiber. Before diving into a specific implementation, browse the Drei documentation on GitHub, which includes clear names, helpful images, and code snippets that show what each helper does and how to use it. There are also helpers like @react-three/cannon and @react-three/rapier, which add physics to any scene. In cases where the design tools don't offer sufficient boolean operations, @react-three/csg comes in handy. Additionally, leva provides a simple React hook that can be paired with parts of a scene to try out values in the browser without reloading, which is useful for experimenting with changes.
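
As a quick illustration of leva, the sketch below exposes two values that can be tweaked live in the browser panel (the control names and defaults are arbitrary):

import { useControls } from 'leva'

function AdjustableBox() {
  // The leva panel appears automatically and updates these values live.
  const { color, scale } = useControls({ color: 'red', scale: 2 })

  return (
    <mesh scale={scale}>
      <boxGeometry args={[1, 1, 1]} />
      <meshStandardMaterial color={color} />
    </mesh>
  )
}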

An experimental package to keep an eye on is @react-three/editor, which allows changes made to the scene in the browser to be written back into the code. This is similar to the functionality offered by design tools, but without the extra step of exporting the changes.

Basic Example

import React, { useEffect, useRef } from 'react'
import { Vector3 } from 'three'
import { Canvas, useFrame } from '@react-three/fiber'
import { OrthographicCamera, useGLTF } from '@react-three/drei'

function Scene() {
  const mesh = useRef()
  const { nodes: { Shape_0: shape } } = useGLTF('https://onwebfocus.com/button.gltf')

  // This rotates the icon.
  useFrame((_state, delta) => (mesh.current.rotation.y += delta))

  // This ensures the rotation happens around the center of the icon.
  useEffect(() => {
    mesh.current.geometry.computeBoundingBox()
    const boundingBox = mesh.current.geometry.boundingBox
    const center = new Vector3()
    boundingBox.getCenter(center)
    mesh.current.geometry.translate(-center.x, -center.y, -center.z)
  }, [])

  return (
    <>
      <mesh
        ref={mesh}
        geometry={shape.geometry}
        position={[0, 0, 0]}
        rotation={[0, 0, 0]}
        scale={[-0.1, 0.1, 0.1]}
      >
        <meshStandardMaterial color="black" />
      </mesh>
      <OrthographicCamera
        makeDefault
        far={100000}
        near={-100000}
        position={[0, 0, 0]}
        rotation={[0, 0, 0]}
      />
    </>
  )
}

export default () => {
  return (
    <Canvas style={{ height: '100%', width: '100%' }} dpr={1}>
      <Scene />
    </Canvas>
  )
}

In this example, a GLTF model is loaded using the useGLTF hook. The mesh object in the markup is rotated on the y-axis with each new frame. To ensure that the rotation happens around the center of the object, the code inside useEffect runs once after the first render. This is necessary because the object's origin at [0, 0, 0] may not match its geometric center, which is needed for a proper rotation. To avoid the issue entirely, it's best to create the object around the center in the design tool.

Camera Positioning

It's important to have a basic understanding of how cameras work and how to position them in your 3D scene. One of the most commonly used cameras is the PerspectiveCamera, which simulates how our eyes perceive objects, making things further away appear smaller. Another type of camera, the OrthographicCamera, allows you to define a specific area of the scene to display without distance affecting the size of objects. This might seem odd, but sometimes using an orthographic camera can lead to better results in your 3D rendering.

import { Canvas } from '@react-three/fiber'
import { PerspectiveCamera, OrthographicCamera } from '@react-three/drei'
        
export default () => (
  <Canvas>
    <PerspectiveCamera
      far={8000}
      near={5}
      fov={45}
      position={[0, 100, 1000]}
      makeDefault={true}
    />
    <OrthographicCamera
      far={100000}
      near={-100000}
      position={[0, 0, 0]}
      rotation={[0, 0, 0]}
    />
  </Canvas>
)

Lights and Shadows

To light up objects in a scene, there are several options available. One is ambientLight, which illuminates the entire scene evenly from all directions. Another is pointLight, which behaves like a light bulb and shines light in all directions from a given point, with its intensity decreasing over distance. A directionalLight, on the other hand, acts like a far-away light source such as the sun, whose parallel rays don't scatter or weaken with distance. For objects to cast or receive shadows, the castShadow and receiveShadow properties must be set on the mesh, the light itself needs castShadow, and the Canvas needs the shadows flag enabled.

export default () => (
  <Canvas shadows>
    <ambientLight intensity={0.5} />
    <pointLight position={[10, 10, 10]} />
    <directionalLight castShadow color="white" position={[0, 0, 5]} />
    <mesh receiveShadow castShadow />
  </Canvas>
)

The Rendering Loop

Graphics-based frameworks like Three.js operate differently than regular React. Instead of relying on React's rendering lifecycle, Three.js uses a custom function called the rendering loop, which is typically called 60 times per second to add movement and interactivity to the scene. The regular React rendering tree isn't performant enough to run this frequently, so the rendering loop operates independently. While React rendering is ideal for adding or removing elements based on user interactions, the rendering loop is best used to make small adjustments to existing elements. This can be achieved with the useFrame hook, which provides access to the rendering loop on a per-component basis. The hook's callback receives a delta, a small number that varies with performance. Adding this delta to position or rotation values each frame results in smooth, framerate-independent movement in the scene.

import { Vector3 } from 'three'
import { useFrame } from '@react-three/fiber'

// Inside a component that holds camera and mesh refs.
useFrame((_state, delta) => {
  // Gradually move the camera towards a target position.
  camera.current.position.lerp(new Vector3(10, 10, 10), 0.01)
  // Rotate the mesh at a constant speed regardless of framerate.
  mesh.current.rotation.y += delta
})

The useFrame hook can be used in any part of the component tree and isn't restricted to a single callback. If the browser's performance is sufficient, all callbacks will be called around 60 times per second and the scene will adjust accordingly. If performance is lower, there will be fewer calls per second, resulting in a bigger delta value and a lower framerate, which may become noticeable to the user.

Physics

Adding physics to a scene can enhance its realism and create a sense of movement. By using @react-three/rapier, objects can interact with each other and behave as if they are affected by gravity.

import React, { useRef } from 'react'
import { Canvas } from '@react-three/fiber'
import { OrthographicCamera, Box } from '@react-three/drei'
import { Physics, RigidBody, Debug } from '@react-three/rapier'

const Floor = () => (
  <RigidBody type="fixed" colliders="cuboid">
    <Box
      position={[0, -50, 0]}
      scale={[500, 10, 500]}
      rotation={[0, 0, 0]}
      receiveShadow
    >
      <shadowMaterial opacity={0.2} />
      <meshStandardMaterial color="purple" />
    </Box>
  </RigidBody>
)

function Scene() {
  const mesh = useRef()

  return (
    <>
      <Physics gravity={[0, -100, 0]} colliders="hull">
        <RigidBody restitution={2}>
          <mesh ref={mesh} position={[0, 0, 0]} rotation={[0, 0, 0]} scale={3}>
            <boxGeometry args={[10, 10, 10]} />
            <meshStandardMaterial color="red" />
          </mesh>
        </RigidBody>
        <RigidBody restitution={2}>
          <mesh ref={mesh} position={[-20, 15, 0]} rotation={[0, 0, 0]} scale={3}>
            <boxGeometry args={[10, 10, 10]} />
            <meshStandardMaterial color="green" />
          </mesh>
        </RigidBody>
        <RigidBody restitution={2}>
          <mesh ref={mesh} position={[30, 0, 0]} rotation={[0, 0, 0]} scale={3}>
            <boxGeometry args={[20, 20, 20]} />
            <meshStandardMaterial color="blue" />
          </mesh>
        </RigidBody>
        <Floor />
      </Physics>
      <ambientLight />
      <pointLight position={[10, 10, 10]} />
      <OrthographicCamera
        name="1"
        makeDefault={true}
        zoom={1}
        far={100000}
        near={-100000}
        position={[0, 0, 0]}
        rotation={[0, 0, 0]}
        scale={1}
      />
    </>
  )
}

export default () => {
  return (
    <Canvas style={{ height: '100%', width: '100%' }} dpr={1}>
      <Scene />
    </Canvas>
  )
}

To enable interactions between components and the environment, the components need to be wrapped in the Physics component. A gravity vector can be defined on it, which determines the direction and strength of gravity. To pull objects towards the ground, similar to gravity on Earth, it can be set to something like [0, -100, 0]. A Floor can be added to the scene by wrapping a Box in a RigidBody to stop the objects from falling indefinitely. If you want to visualize the collision shapes generated by RigidBody and the other collision wrappers, you can use the Debug component. Simply add it inside the Physics component.
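
A minimal sketch of how the Debug component slots in (the contents of the physics world are abbreviated here):

import { Physics, RigidBody, Debug } from '@react-three/rapier'

const World = () => (
  <Physics gravity={[0, -100, 0]}>
    {/* Renders wireframes for every collider in the scene. */}
    <Debug />
    <RigidBody>
      <mesh>
        <boxGeometry args={[10, 10, 10]} />
        <meshStandardMaterial color="red" />
      </mesh>
    </RigidBody>
  </Physics>
)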

Performance

Rendering 3D in a browser requires significantly more performance than doing the same natively, such as when playing a game. Even small React Three Fiber scenes with ongoing animations can cause the fans to spin due to the high demand for processing power.

If you're developing on older hardware without a dedicated GPU, it's a good idea to turn off animations for elements you're not currently working on. Constantly spinning fans can also be distracting for users. If 3D scenes are used only for decorative purposes, they should not consume much rendering power, or they should be disabled entirely on slower devices. To ensure big scenes run as smoothly as possible, check out the Scaling Performance chapter in the documentation, which introduces various techniques.

const Scene = () => <Canvas dpr={1}>{/* ... */}</Canvas>

One simple way to improve performance is to adjust the dpr (device pixel ratio) value on the Canvas component. A lower dpr, such as 1 or 0.5, will look less crisp but uses less graphics power. By default, the device's pixel ratio is used, clamped between 1 and 2, which looks better but requires more graphics power. You can also use the PerformanceMonitor component from drei to automatically adjust the dpr based on the current framerate.

import { useState } from 'react'
import { Canvas } from '@react-three/fiber'
import { PerformanceMonitor } from '@react-three/drei'

function Scene() {
  const [dpr, setDpr] = useState(1.5)

  return (
    <Canvas dpr={dpr}>
      <PerformanceMonitor
        onIncline={() => setDpr(2)}
        onDecline={() => setDpr(1)}
        onChange={({ fps }) => console.log(fps)}
      />
      {/* ... */}
    </Canvas>
  )
}

Responsive Scene Design

When creating 3D scenes, the canvas in which they are rendered is automatically scaled to match the provided size, making it responsive. However, it's important to ensure that objects in the scene remain recognizable even on smaller devices. The aspect ratio available to the canvas can vary greatly depending on the device: a scene that appears in portrait mode on a mobile device may be shown in landscape on a desktop. To make sure the scene displays correctly across devices, consider the aspect ratio and make adjustments as needed. One way to handle this is to place the focal content in the middle of the scene and accept that the edges may be cut off horizontally or vertically. In some cases, it may also be necessary to position the camera differently depending on the available screen size.

The size of objects on the Canvas is determined by the available height of the canvas, and scales linearly with it. For example, a 50 by 50 box will appear twice as big on a 1000-pixel-high viewport as on a 500-pixel-high one. The canvas width, on the other hand, does not affect object size. Instead, it can cause the scene to be cut off or extended to the left and right.

Server-side rendering is not possible for Canvas elements, meaning that React Three Fiber scenes can only be rendered in the browser. This also means the actual device size is always available when the scene renders, so it can be laid out responsively™. Any changes to the viewport can be tracked using a small React hook, which re-renders the necessary parts of the scene. If you're looking to integrate 3D scenes into an existing server-side rendered setup with Next.js, consider the react-three-next starter template, which is designed to optimize the integration for initial rendering performance.
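
A possible sketch of such a hook, used here to move the camera further back on narrow screens (the breakpoint and camera positions are arbitrary):

import { useEffect, useState } from 'react'
import { PerspectiveCamera } from '@react-three/drei'

// Re-renders the component whenever the window is resized.
function useWindowWidth() {
  const [width, setWidth] = useState(window.innerWidth)

  useEffect(() => {
    const onResize = () => setWidth(window.innerWidth)
    window.addEventListener('resize', onResize)
    return () => window.removeEventListener('resize', onResize)
  }, [])

  return width
}

function ResponsiveCamera() {
  const width = useWindowWidth()

  // Narrow screens get a camera that is positioned further away.
  return <PerspectiveCamera makeDefault fov={45} position={[0, 100, width < 768 ? 1500 : 1000]} />
}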

Landing Page Example

The following web page was created to showcase a React Native plugin called numic, which incorporates all the techniques described in this post for rendering 3D scenes in the browser.

Live Demo · Code

🚧 Detour: Web Game Development

A promising and much-needed comprehensive guide on creating web games, called webgamedev.com, has been shared on Twitter. Although the documentation still largely consists of placeholders, it looks like it could become a great resource. Gaming is arguably the most significant use case for 3D on the web (and elsewhere). While performance may not be comparable to native games, the web offers exciting advantages for smaller games, as users don't need to install anything and the games work on any device or operating system. Connected to this project is webgamer.io, a collection of web games that can be played to get a sense of what's possible. The author of the documentation, Jonathan Verrecchia, has announced that he will devote all of 2023 to improving it. Jonathan's own background in web development highlights how approachable the transition from 2D to 3D web development can be.

In this YouTube video, Paul Henschel, the creator of React Three Fiber, provides an overview of how to use the library to develop games, covering topics such as animations and physics.

One could argue that, unlike in other areas of technology, there hasn't been nearly as much progress in game design. Even with a lot of effort, browser-based games are unlikely to surpass the video games most of us know from childhood. Thanks to improvements in graphics integration for browsers, much more is now possible with just JavaScript. Still, most games created for the browser resemble the Flash-based games of more than 15 years ago. Browser-based games currently occupy a niche and require a lot of effort and creativity to create.

This post was revised with ChatGPT, a Large Language Model.