This article was generated 100% by AI, using GPT-5.4.

Introduction

This site places a faint, transparent-watercolor-like sheet of paper, with bleeding brush marks, behind the content. Visually it is a quiet decoration, but internally it is not a single image. It layers paper drawn with WebGPU, brush marks baked with Canvas 2D, and normal DOM content.

First, this is not a physically correct watercolor simulation. Simulating watercolor seriously would require at least fields like these.

water(x, y, t)      water amount
pigment(x, y, t)    pigment concentration
paper(x, y)         paper relief, fiber, absorbency
velocity(x, y, t)   water velocity
dryness(x, y, t)    dryness

From there, diffusion, advection, absorption, and sedimentation would all have to update those fields every frame. Even looking at pigment alone, a very rough update would take this form.

pigment_next =
  pigment
  + diffusion_rate * laplacian(pigment)
  - absorption_rate * paper_absorbency * pigment
  - sedimentation_rate * pigment

That is interesting, but it is too much for a background meant to support reading. What this site needs is not complete physics, but enough cues for a reader to feel "thin pigment on paper."
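For scale, a single explicit step of that rough pigment update, on a toy grid, could be sketched like this. The rates and the grid are illustrative stand-ins, not values from the site.

```javascript
// One explicit Euler step of the rough pigment update sketched above.
// Rates and grid size are illustrative, not values from the site.
function stepPigment(pigment, absorbency, width, height, rates) {
  const { diffusion, absorption, sedimentation } = rates;
  const next = new Float32Array(pigment.length);
  for (let y = 0; y < height; y += 1) {
    for (let x = 0; x < width; x += 1) {
      const i = y * width + x;
      // 5-point Laplacian with clamped (edge-repeating) borders.
      const left = pigment[y * width + Math.max(x - 1, 0)];
      const right = pigment[y * width + Math.min(x + 1, width - 1)];
      const up = pigment[Math.max(y - 1, 0) * width + x];
      const down = pigment[Math.min(y + 1, height - 1) * width + x];
      const laplacian = left + right + up + down - 4 * pigment[i];
      next[i] =
        pigment[i] +
        diffusion * laplacian -
        absorption * absorbency[i] * pigment[i] -
        sedimentation * pigment[i];
    }
  }
  return next;
}
```

Even this toy version touches every cell every frame, per field, which is exactly the cost the site avoids paying for a background.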

So the implementation decomposes transparent watercolor into these perceptual pieces.

  • the paper has small relief and fiber

  • thin pigment is not deposited perfectly evenly

  • the edge is not a circle and warps slightly

  • pigment pools in some places and is lifted by water in others

  • bleeding does not happen exactly at draw time, but spreads slightly later

  • it still does not interfere with text, scrolling, or links

This decomposition is almost the whole design. Instead of writing a giant fluid shader, split the effect into small layers that do not break the page when they fail.

Split The Layers

The on-screen stack looks like this.

paper fallback canvas
paper WebGPU canvas
brush 2D canvas
page content
controls

The paper always exists as a fixed background. When WebGPU is available, paper WebGPU canvas draws the paper texture. When WebGPU is not available, the pre-rendered paper fallback canvas remains visible. The brush bakes marks into brush 2D canvas only when the user enables it. The body content stays as normal DOM above these layers.

The reason for this structure is to separate expression from failure scope. If WebGPU initialization fails, paper remains. If the brush is heavy on a device, the user can turn it off. The background canvases use pointer-events: none, so they do not steal links or scrolling.

Decorative rendering should be designed not only for how good it looks when it succeeds, but also for how quietly it fails. If the background behaves like it is more important than the text even for a moment, a site meant for reading has already lost.

Bake One Sheet Of Paper With WebGPU

WebGPU is used only for the paper layer. The reason is simple: the paper needs to cover the whole screen, but it barely changes over time. It only needs to be drawn on initialization and resize, so sending it to the GPU is worthwhile.

Initialization asks for a low-power adapter.

const adapter = await gpu.requestAdapter({ powerPreference: "low-power" });
if (!adapter) return; // no adapter: keep the 2D fallback paper visible
const device = await adapter.requestDevice();
const format = gpu.getPreferredCanvasFormat();

Not asking for high-performance is intentional. This background is not the main actor. There is no need to wake the strongest possible GPU just to draw reading paper.

Rendering uses a fullscreen triangle. There is no vertex buffer; the three vertices are derived from vertex_index.

var positions = array<vec2f, 3>(
  vec2f(-1.0, -1.0),
  vec2f(3.0, -1.0),
  vec2f(-1.0, 3.0)
);

This triangle covers the whole clip space. It uses fewer vertices than a rectangle made from two triangles, and it has no diagonal seam. That makes it a good fit for a full-screen procedural texture like paper.

On the JavaScript side, the uniform buffer is 80 bytes, with room for alignment and padding. It is packed as a Float32Array containing resolution, seed, paper tooth strength, fiber density, light direction, and related values.

return new Float32Array([
  width,
  height,
  seed,
  0.88, // embossDepth
  0.32, // grainScale
  0.56, // fiberDensity
  0.24, // fiberLength
  0.42, // pulpDensity
  0.06, // warmTint
  0,
  0,
  0.72, // shadowStrength
  0,
  -0.56, // lightYaw
  -0.26, // lightPitch
  0,
  0,
  0,
  0,
  0,
]);

On the WGSL side, those floats map onto this struct.

struct Uniforms {
  resolution: vec2f,
  seed: f32,
  embossDepth: f32,
  grainScale: f32,
  fiberDensity: f32,
  fiberLength: f32,
  pulpDensity: f32,
  warmTint: f32,
  coolTint: f32,
  bloomStrength: f32,
  shadowStrength: f32,
  vignette: f32,
  lightYaw: f32,
  lightPitch: f32,
  _padding: vec2f,
};

The important part is that the shader does not receive time. The paper parameters are determined by the viewport and seed, and the paper does not move while someone reads. If the paper moves, it stops looking like watercolor paper and starts looking like a screensaver.

Build Paper Coordinates

In the fragment shader, uv is first converted into paper coordinates.

let aspect = uniforms.resolution.x / max(uniforms.resolution.y, 1.0);
let centered = (input.uv - 0.5) * vec2f(aspect * 1.72, 1.72);
let point =
  rotate(centered * vec2f(1.0, 1.04), -0.06)
  + vec2f(uniforms.seed * 6.4, uniforms.seed * 3.2);

The work here is plain, but it matters.

  • aspect absorbs differences in viewport shape

  • 1.72 sets paper grain density relative to the screen

  • vec2f(1.0, 1.04) adds a slight vertical stretch

  • rotate(..., -0.06) keeps the grain from aligning too perfectly to the screen axes

  • seed keeps every viewport from producing the same-looking pattern

A paper-like texture becomes CG-like very quickly when it follows screen coordinates perfectly. When fibers and creases line up horizontally and vertically, they begin to look like cloth or a grid. That is why the coordinates are tilted just a little.
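The same transform can be sketched in JS; this is a hypothetical port of the rotate helper and the coordinate build-up, and the key property is that rotation changes direction without changing scale.

```javascript
// Hypothetical JS port of the rotate() helper used for paper coordinates.
function rotate(point, angle) {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return { x: point.x * c - point.y * s, y: point.x * s + point.y * c };
}

// Paper coordinates, mirroring the WGSL: center, apply aspect and grain
// density, stretch slightly in y, tilt, then offset by the seed.
function paperPoint(uv, resolution, seed) {
  const aspect = resolution.x / Math.max(resolution.y, 1);
  const centered = { x: (uv.x - 0.5) * aspect * 1.72, y: (uv.y - 0.5) * 1.72 };
  const tilted = rotate({ x: centered.x, y: centered.y * 1.04 }, -0.06);
  return { x: tilted.x + seed * 6.4, y: tilted.y + seed * 3.2 };
}
```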

Split Noise By Role

The basic ingredients of the paper shader are hash21, noise2, fbm, and ridgedFbm. noise2 is value noise made by interpolating hashed grid points, and fbm layers four octaves of that noise.

fn fbm(seed: vec2f) -> f32 {
  var value = 0.0;
  var amplitude = 0.5;
  var point = seed;

  for (var index = 0; index < 4; index = index + 1) {
    value = value + noise2(point) * amplitude;
    point = point * 2.02 + vec2f(11.7, 5.9);
    amplitude = amplitude * 0.5;
  }

  return value;
}

Normal fbm is used for loose paper variation and warp. ridgedFbm, on the other hand, uses 1.0 - abs(noise * 2.0 - 1.0) to create ridge-like values. That works well for broad wrinkles and relief.
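The hash, value-noise, and fbm chain can be ported to JS in a few lines. The hash constants below are common shader-style values, not necessarily the site's; the structure (bilinear lattice interpolation, four octaves at half amplitude, and the ridged fold) matches the description above.

```javascript
// Illustrative JS port of the value-noise chain. The hash constants are
// common shader-toy style values, not necessarily the site's.
const fract = (x) => x - Math.floor(x);
const hash21 = (x, y) => fract(Math.sin(x * 127.1 + y * 311.7) * 43758.5453);

// Value noise: bilinear interpolation of hashed lattice corners with a
// smoothstep fade, so the result stays in [0, 1).
function noise2(x, y) {
  const ix = Math.floor(x), iy = Math.floor(y);
  const fx = x - ix, fy = y - iy;
  const ux = fx * fx * (3 - 2 * fx);
  const uy = fy * fy * (3 - 2 * fy);
  const a = hash21(ix, iy), b = hash21(ix + 1, iy);
  const c = hash21(ix, iy + 1), d = hash21(ix + 1, iy + 1);
  return a + (b - a) * ux + (c - a) * uy + (a - b - c + d) * ux * uy;
}

// Four octaves, halving amplitude each time, as in the WGSL fbm.
function fbm(x, y) {
  let value = 0, amplitude = 0.5, px = x, py = y;
  for (let i = 0; i < 4; i += 1) {
    value += noise2(px, py) * amplitude;
    const nx = px * 2.02 + 11.7, ny = py * 2.02 + 5.9;
    px = nx; py = ny;
    amplitude *= 0.5;
  }
  return value;
}

// Ridged variant: fold noise around its midpoint to get crease-like values.
const ridged = (x, y) => 1 - Math.abs(noise2(x, y) * 2 - 1);
```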

This shader does not treat noise as one generic "random pattern." It treats each noise field as a different material role.

  • embossField: broad relief and pits

  • microGrainField: fine grain

  • fiberField: longer flowing fibers

  • fibrilDetailField: short line-segment fibers per cell

  • celluloseNetworkField: Voronoi-like pores and ridges

  • coldPressToothField: cold-press paper tooth

  • crumpleField: large folds and distortion

  • wrinkleLineField: fine wrinkles

In transparent watercolor, pigment looks different on the high and low parts of the paper. So paper cannot be only color noise. It needs to be built as a height field.

Derive Normals From The Height Field

The final paper height is made by paperHeight(point). Inside it, macro, detail, micro, emboss, tooth, grain, and pulp values are combined with weights.

return macroRelief * 0.34
  + detail.x * 2.16
  + micro * (0.76 + uniforms.fiberDensity * 0.16 + uniforms.grainScale * 0.1)
  + (emboss - 0.18) * (0.18 + uniforms.embossDepth * 0.24)
  + (tooth.y - 0.28) * (0.46 + uniforms.embossDepth * 0.34)
  - (tooth.x - 0.18) * (0.34 + uniforms.shadowStrength * 0.22)
  + (grain - 0.5) * 0.018
  + (pulp - 0.3) * 0.018;

Here tooth.y is treated as raised parts, while tooth.x is treated as pores or recessed parts. They are not added with the same sign. Raised parts become bright ridges, while recessed parts later become cavities where multiplied pigment appears to settle.

The normal is not computed analytically. paperHeight is sampled at the point and at two small offsets, a forward-difference approximation of the gradient.

fn paperNormal(point: vec2f) -> vec3f {
  let epsilon = 0.00076;
  let center = paperHeight(point);
  let dx = paperHeight(point + vec2f(epsilon, 0.0)) - center;
  let dy = paperHeight(point + vec2f(0.0, epsilon)) - center;
  let strength = 3.1 + uniforms.embossDepth * 2.2 + uniforms.grainScale * 1.1;
  return normalize(vec3f(-dx * strength, -dy * strength, 1.0));
}

If epsilon is too small, the two samples barely differ and the relief washes out; if it is too large, fine fibers get averaged away. Here it is set to 0.00076 to fit the visible paper grain. It is not a physical unit; it is a unit in paper coordinates, tuned to make the surface look fuzzy without becoming rough.
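The same finite-difference construction works over any scalar height field; here is a generic JS version with a stand-in height function, showing that a flat field yields a straight-up normal and a slope tilts it away.

```javascript
// Finite-difference normal, as in the shader, over an arbitrary height field.
// The height function, epsilon, and strength here are stand-ins.
function surfaceNormal(height, x, y, epsilon, strength) {
  const center = height(x, y);
  const dx = height(x + epsilon, y) - center;
  const dy = height(x, y + epsilon) - center;
  const nx = -dx * strength;
  const ny = -dy * strength;
  const length = Math.hypot(nx, ny, 1);
  return { x: nx / length, y: ny / length, z: 1 / length };
}
```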

Light is built from yaw and pitch.

let xy = cos(pitch);
return normalize(vec3f(cos(yaw) * xy, sin(yaw) * xy, sin(-pitch)));

The uniform values are lightYaw = -0.56 and lightPitch = -0.26, so the light hits weakly from an upper diagonal direction. Frontal light kills the relief, while a strong grazing light makes the paper too loud.
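In JS, the same construction makes the property easy to check: with the article's yaw and pitch, the resulting direction is unit length and has positive z, i.e. the light arrives from in front of the paper rather than grazing it.

```javascript
// Light direction from yaw and pitch, mirroring the WGSL snippet.
function lightDirection(yaw, pitch) {
  const xy = Math.cos(pitch);
  const x = Math.cos(yaw) * xy;
  const y = Math.sin(yaw) * xy;
  const z = Math.sin(-pitch);
  const length = Math.hypot(x, y, z);
  return { x: x / length, y: y / length, z: z / length };
}
```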

Choose Paper Tone With Cavities And Ridges

At the end of the fragment shader, paper color is built from the normal and height. The base color is a slightly warm white.

var tone = vec3f(0.952, 0.947, 0.927);

Then cavity and ridge masks are built and mixed in.

let cavityMask = saturate(
  max(0.0, -localRelief * 1.84)
    + tooth.x * 0.68
    + (1.0 - normal.z) * 0.4
    + max(0.0, -diffuse) * 0.16
    + crumple.x * 0.08
    + wrinkleLines.x * 0.05
);

let ridgeMask = saturate(
  max(0.0, localRelief * 1.42)
    + tooth.y * 0.62
    + detail.y * 0.06
    + max(0.0, diffuse) * 0.18
    + crumple.y * 0.06
    + wrinkleLines.y * 0.04
);

Cavities are lowered a little.

tone = mix(
  tone,
  vec3f(0.824, 0.813, 0.782),
  cavityMask * (0.18 + uniforms.shadowStrength * 0.12)
);

Ridges are lifted a little.

tone = mix(
  tone,
  vec3f(0.975, 0.97, 0.949),
  ridgeMask * (0.082 + uniforms.embossDepth * 0.06)
);

Finally, clamp(tone, vec3f(0.72), vec3f(1.0)) prevents the color from collapsing too far. The paper is a background, so it should not create black shadows. It should get dark enough to read as recessed, but not so dark that it steals contrast from the text.

Keep The Paper Still

WebGPU rendering creates one command encoder in render(), calls draw(3), and submits it.

renderPass.setPipeline(pipeline);
renderPass.setBindGroup(0, bindGroup);
renderPass.draw(3);
renderPass.end();
device.queue.submit([commandEncoder.finish()]);

After that, it is not redrawn until resize or orientation change. The paper is not "a picture that changes over time." It is the material of the screen.

DPR is also capped at 1.25.

const dpr = Math.min(window.devicePixelRatio || 1, maxDpr);
const width = Math.max(1, Math.round(window.innerWidth * dpr));
const height = Math.max(1, Math.round(window.innerHeight * dpr));

Rendering background paper at the device's native DPR adds very little information for the reader. But it does increase the fragment count. For example, a 1440 x 900 viewport is about 2.0M pixels at DPR 1.25, but about 5.2M pixels at DPR 2. Paying more than 2.5x the fragment work for paper roughness is not a good trade.
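The trade-off is easy to verify with a helper that mirrors the resize math above:

```javascript
// Backing-store pixel count for a CSS viewport, using the same rounding
// and DPR cap as the resize code above.
function backingPixels(cssWidth, cssHeight, deviceDpr, maxDpr) {
  const dpr = Math.min(deviceDpr || 1, maxDpr);
  const width = Math.max(1, Math.round(cssWidth * dpr));
  const height = Math.max(1, Math.round(cssHeight * dpr));
  return width * height;
}
```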

Draw The Fallback Canvas As Insurance

To keep the paper from disappearing when WebGPU is not available, a 2D canvas fallback is drawn first. It uses a base color, radial wash, patches, fibers, and a grain tile.

base fill       #f1eedf
shadow patches  820 * areaScale
highlight       520 * areaScale
fibers          260 * areaScale
grain tile      384 x 384

The fallback is simpler than the WebGPU shader, but it is much better than a flat solid background. When the WebGPU layer initializes, the fallback opacity is lowered. So the fallback is not an "old drawing"; it is a safety net underneath the WebGPU layer.

Connect It As A Vue Island

This background is a Vue component, but the whole page is not running as a client app. It is registered as a Vuerend island, and only the necessary background part hydrates on the client.

export const WatercolorBackgroundIsland = defineIsland("watercolor-background", {
  component: WatercolorBackgroundIslandView,
  load: () => import("./features/ShellWatercolorBackgroundIslandLoader"),
  hydrate: "load",
});

Each route only places this island through a thin wrapper.

<script setup lang="ts">
import { WatercolorBackgroundIsland } from "../Islands";
</script>

<template>
  <WatercolorBackgroundIsland />
</template>

The background is a separate concern from route content, so WebGPU and brush state are not brought into the route component. The same background can be placed on any page, and changes to the background implementation do not leak into article or list components.

The Vue component template declares only the visual layers and controls.

<div class="watercolor-bg-container" aria-hidden="true">
  <span ref="fallbackLayerElement" class="paper-fallback" />
  <canvas ref="fallbackCanvasElement" class="paper-fallback-canvas" aria-hidden="true" />
  <canvas ref="paperCanvasElement" class="paper-canvas" aria-hidden="true" />
  <canvas ref="brushCanvasElement" class="brush-canvas" aria-hidden="true" />
</div>

It uses aria-hidden="true" because this is decoration, not content. There is no need for a screen reader to announce that there are three canvases.

In CSS, each one is a fixed layer.

.paper-fallback        z-index: -4
.paper-fallback-canvas z-index: -3
.paper-canvas          z-index: -2
.brush-canvas          z-index: -1

The canvases themselves use pointer-events: none, so they do not capture user interaction. The brush rendering listens to window pointer events instead of attaching pointer events directly to the canvas. That lets it behave as a background regardless of whether the canvas layer is above or below DOM content.

The state Vue owns in script setup is small.

const fallbackLayerElement = ref<HTMLElement>();
const fallbackCanvasElement = ref<HTMLCanvasElement>();
const paperCanvasElement = ref<HTMLCanvasElement>();
const brushCanvasElement = ref<HTMLCanvasElement>();
const isDrawingEnabled = ref(false);

Only DOM refs and the user-visible isDrawingEnabled flag are reactive. The WebGPU device, CanvasRenderingContext2D, and wet bloom arrays are not Vue state.

The brush renderer receives the canvas and enabled flag as functions.

const brushLayer = createBrushLayer({
  canvas: () => brushCanvasElement.value,
  isDrawingEnabled: () => isDrawingEnabled.value,
  maxDpr,
});

This is the narrow connection point between Vue and the renderer. brushLayer reads brushCanvasElement.value and isDrawingEnabled.value only when it needs them, but Vue does not own the drawing history. Vue only answers "is it allowed to draw right now?" and does not participate in the internal stroke state.
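A skeletal version of that boundary makes "Vue only answers whether drawing is allowed" concrete. Everything inside the factory below is hypothetical scaffolding (including the strokeCount accessor, added only so the sketch is observable); the real createBrushLayer does far more.

```javascript
// Skeletal sketch of the Vue <-> renderer boundary. Only the permission
// gating and the closed-over state are shown; internals are hypothetical.
function createBrushLayer({ canvas, isDrawingEnabled }) {
  // Renderer-only state, deliberately outside any reactivity system.
  let sequence = 0;

  return {
    draw(event) {
      // Ask the UI layer for permission at call time, not at creation time.
      if (!isDrawingEnabled() || !canvas()) return;
      sequence += 1;
      // ...stroke rendering would happen here...
    },
    clear() {
      sequence = 0;
    },
    strokeCount() {
      return sequence;
    },
  };
}
```

Because canvas and isDrawingEnabled are functions, the layer always sees the current ref values without ever holding Vue state itself.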

Startup is kept inside onMounted. localStorage, window, navigator.gpu, and canvas contexts cannot be touched during server render.

onMounted(() => {
  isDrawingEnabled.value = localStorage.getItem(brushStorageKey) === "true";
  renderPaperFallback(fallbackCanvasElement.value, maxDpr);
  brushLayer.resize();
  window.addEventListener("resize", queueResize, { passive: true });
  window.addEventListener("orientationchange", queueResize);
  window.addEventListener("pointerdown", brushLayer.draw, { passive: true });
  window.addEventListener("pointermove", brushLayer.draw, { passive: true });

  if (paperCanvasElement.value) {
    void initializePaperShader({
      fallbackCanvas: fallbackCanvasElement.value,
      fallbackLayer: fallbackLayerElement.value,
      maxDpr,
      paperCanvas: paperCanvasElement.value,
    }).then((cleanup) => {
      paperCleanup = cleanup;
    });
  }
});

The order is intentional. First, the brush on/off state is restored from localStorage. Next, the fallback paper is drawn into a 2D canvas. After that, the brush canvas is resized to the viewport. Finally, the WebGPU paper initializes asynchronously. Even if WebGPU is late, fallback paper is visible in the meantime.

Resize is not handled directly; it is batched into one frame.

function queueResize() {
  if (resizeFrame !== 0) {
    return;
  }

  resizeFrame = requestAnimationFrame(() => {
    resizeFrame = 0;
    renderPaperFallback(fallbackCanvasElement.value, maxDpr);
    brushLayer.resize();
  });
}

Canvas resize recreates the bitmap, so doing it on every event during continuous resize can be expensive. Batching with requestAnimationFrame is enough to reduce a lot of the perceived hitching.

The controls also touch only Vue state and the brush layer's public API.

function toggleDrawing() {
  isDrawingEnabled.value = !isDrawingEnabled.value;
  localStorage.setItem(brushStorageKey, String(isDrawingEnabled.value));
}

function clearBrushCanvas() {
  brushLayer.clear();
}

Clear only erases the brush canvas. It does not reset the paper fallback or WebGPU paper. Paper is the material of the screen, while the brush is pigment the user placed on top, so the erase unit is separate too.

Cleanup is explicit.

onUnmounted(() => {
  if (resizeFrame !== 0) {
    cancelAnimationFrame(resizeFrame);
  }
  window.removeEventListener("resize", queueResize);
  window.removeEventListener("orientationchange", queueResize);
  window.removeEventListener("pointerdown", brushLayer.draw);
  window.removeEventListener("pointermove", brushLayer.draw);
  paperCleanup?.();
});

Inside paperCleanup, the WebGPU context also calls unconfigure(). For a component that installs page-wide event listeners like a background does, I think unmount deserves at least as much care as mount.

Why The Brush Is Not WebGPU

The paper is WebGPU, but the brush is drawn with Canvas 2D. This was a surprisingly important decision.

The brush is local rendering tightly coupled to pointer events. The required state is only sequence, the last coordinates, the last draw time, and a short-lived bloom array. Putting that into WebGPU texture ping-pong or storage buffers would make the implementation heavier than the background decoration deserves.

On this site, the brush is baked into Canvas 2D in this order.

pointer event
-> determine size from pressure
-> create seeded randomness
-> fill an organic Path2D
-> add pigment pools
-> add granulation
-> lift water with destination-out
-> add capillary bloom for a few frames

In other words, the brush layer is not a simulation buffer. It is a drawing material that bakes history. It does not recompute every stroke on every frame. That matters a lot.

Stroke Size And Seed

Brush coordinates are multiplied by DPR to match the canvas backing buffer.

const rect = canvas.getBoundingClientRect();
const dpr = canvas.width / Math.max(1, rect.width);
const x = (event.clientX - rect.left) * dpr;
const y = (event.clientY - rect.top) * dpr;

Pointer pressure drives brush pressure, with 0.6 substituted when the device reports zero. Mice often report no pressure; without the fallback, the brush would be too thin.

const pressure = event.pressure > 0 ? event.pressure : 0.6;
const size = (48 + pressure * 84) * dpr;

With this formula, pressure 0.6 becomes 98.4 * dpr, and pressure 1.0 becomes 132 * dpr. That is a little large for a watercolor background brush, but alpha is low, so without that much area the mark looks like a dot rather than a wash.

The random seed is made from sequence and coordinates.

const seed = ((sequence + 1) * 2654435761 + Math.round(x * 13) + Math.round(y * 17)) >>> 0;
const random = mulberry32(seed);

2654435761 is a large prime close to 2^32 times the golden-ratio conjugate, a classic multiplicative-hashing constant that spreads values across the 32-bit range. Even with the same sequence, different coordinates produce different shapes, and even at the same coordinates, advancing the sequence gives a different pigment pool. Cryptographic randomness is not needed. What is needed is stable randomness that keeps the stroke from reading as a stamped, too-perfect circle.
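mulberry32 is a well-known tiny 32-bit PRNG; a standard implementation, together with the seed line from above, shows how the stroke gets stable per-(sequence, position) randomness:

```javascript
// Standard mulberry32: a tiny seeded PRNG returning floats in [0, 1).
function mulberry32(seed) {
  let state = seed >>> 0;
  return function () {
    state = (state + 0x6d2b79f5) | 0;
    let t = Math.imul(state ^ (state >>> 15), 1 | state);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Seed construction as in the article: stable per (sequence, position).
function strokeSeed(sequence, x, y) {
  return ((sequence + 1) * 2654435761 + Math.round(x * 13) + Math.round(y * 17)) >>> 0;
}
```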

Make Organic Patches Instead Of Circles

The biggest enemy of a watercolor look is a circle that is too clean. With only a Canvas radial gradient, the center and outer ring are too visible.

So the painted area is warped with Path2D. Eighteen points are placed around a circumference, and their radius is perturbed using 2x, 3x, and 5x sine waves plus jitter.

const ripple =
  Math.sin(angle * 2 + phaseA) * irregularity * 0.54 +
  Math.sin(angle * 3 + phaseB) * irregularity * 0.34 +
  Math.sin(angle * 5 + phaseC) * irregularity * 0.18;
const jitter = (random() - 0.5) * irregularity * 0.28;
const scale = Math.max(0.58, 1 + ripple + jitter);

The periods 2, 3, and 5 drift against each other, so the result is less likely to become a simple flower shape. Math.max(0.58, ...) also keeps it from collapsing too much.
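Pulling the radius perturbation into one function makes the floor and the determinism testable. The small LCG below is only a deterministic stand-in for the site's mulberry32.

```javascript
// Minimal seeded PRNG so the example is deterministic (stand-in for mulberry32).
function lcg(seed) {
  let s = seed >>> 0;
  return () => ((s = (s * 1664525 + 1013904223) >>> 0) / 4294967296);
}

// Radius multipliers for the 18-point organic patch outline. Phases come
// from the seeded PRNG, so the same seed reproduces the same shape.
function patchScales(random, irregularity) {
  const phaseA = random() * Math.PI * 2;
  const phaseB = random() * Math.PI * 2;
  const phaseC = random() * Math.PI * 2;
  const scales = [];
  for (let i = 0; i < 18; i += 1) {
    const angle = (i / 18) * Math.PI * 2;
    const ripple =
      Math.sin(angle * 2 + phaseA) * irregularity * 0.54 +
      Math.sin(angle * 3 + phaseB) * irregularity * 0.34 +
      Math.sin(angle * 5 + phaseC) * irregularity * 0.18;
    const jitter = (random() - 0.5) * irregularity * 0.28;
    scales.push(Math.max(0.58, 1 + ripple + jitter));
  }
  return scales;
}
```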

The stroke body fills that patch with a radial gradient.

gradient.addColorStop(0, `rgba(${color.r}, ${color.g}, ${color.b}, ${color.a * 1.72})`);
gradient.addColorStop(0.3, `rgba(${color.r}, ${color.g}, ${color.b}, ${color.a * 1.18})`);
gradient.addColorStop(0.62, `rgba(${color.r}, ${color.g}, ${color.b}, ${color.a * 0.42})`);
gradient.addColorStop(0.88, `rgba(${color.r}, ${color.g}, ${color.b}, ${color.a * 0.08})`);
gradient.addColorStop(1, `rgba(${color.r}, ${color.g}, ${color.b}, 0)`);

The base alpha is usually around 0.04 to 0.05. Even at the center, 0.05 * 1.72 = 0.086, so one draw does not become strong. Transparent watercolor is more about color that appears through overlap than color painted in one shot.

The composite mode is multiply. That keeps the pigment from fully covering the white paper and lets it sink into the paper's light and dark texture.

Pigment Pools, Granulation, And Water Lift

After the main stroke, four layers are added where pigment pools more densely in a few places.

for (let index = 0; index < 4; index += 1) {
  drawBrushPool(context, random, color, size, 0.82 + random() * 0.3);
}

drawBrushPool places a small radial gradient within size * 0.42 of the center. The radius is size * (0.16 + random() * 0.24), and the alpha is color.a * (0.72 + random() * 0.48) * intensity. This creates the feeling that pigment drifts slightly even inside the same water.

Granulation is added with dots and short curves. There are only 10 dots and 5 curves.

context.arc(
  Math.cos(angle) * distance,
  Math.sin(angle) * distance,
  size * (0.006 + random() * 0.016),
  0,
  Math.PI * 2,
);

If too many grains are placed, it stops looking like paper texture and starts looking like noise. Keeping it around 10 makes pigment grain visible up close without making the background noisy behind text.

Then water lift is added with destination-out.

context.globalCompositeOperation = "destination-out";

This slightly erases pigment that was already drawn. In watercolor, wet areas do not only become uniformly darker; water can push pigment away and create pale holes. destination-out is not physical water, but visually it is a cheap way to create this lifted-out look.

Delay The Bleeding Briefly

The worst part of the first implementation was that the bleeding looked like a double ring. Expanding a circular gradient over time makes the center, ring, and outer edge separate too clearly. That looks closer to a UI ripple effect than watercolor.

The current implementation pushes a short-lived bloom into wetBlooms after a stroke is placed.

wetBlooms.push({
  age: 0,
  maxAge: 6 + Math.floor(random() * 3),
  driftX: (random() - 0.5) * size * 0.16,
  driftY: (random() - 0.5) * size * 0.16,
  size: size * (0.94 + random() * 0.14),
});

The lifetime is only 6 to 8 frames. At 60 fps, that is a short illusion of about 100 to 133 ms. If it moves for too long, it stops looking wet and starts looking animated.

Bloom progression uses ease-out.

const progress = bloom.age / bloom.maxAge;
const eased = 1 - (1 - progress) ** 3;

The radius grows with that progression.

const radiusX = bloom.size * (1.04 + eased * 0.76);
const radiusY = bloom.size * (0.84 + random() * 0.2 + eased * 0.52);
const alpha = bloom.color.a * (1 - progress * 0.58) * 0.84;

The alpha itself gradually falls. But because the radius expands, outer pixels receive color later. Visually, that reads less like "the center keeps getting darker" and more like "water reaches the outer edge."
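One frame of that progression can be isolated as a pure function, with the constants from the snippets above; bloom.size, bloom.alpha, and the driftRandom parameter are per-stroke inputs.

```javascript
// Bloom growth over its short lifetime: ease-out radius, fading alpha.
// Constants mirror the article; inputs are per-stroke values.
function bloomFrame(bloom, driftRandom) {
  const progress = bloom.age / bloom.maxAge;
  const eased = 1 - (1 - progress) ** 3;
  return {
    radiusX: bloom.size * (1.04 + eased * 0.76),
    radiusY: bloom.size * (0.84 + driftRandom * 0.2 + eased * 0.52),
    alpha: bloom.alpha * (1 - progress * 0.58) * 0.84,
  };
}
```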

The bloom is also drawn with createOrganicPatchPath, not a circle. Its center is shifted slightly with focusX and focusY. That avoids concentric rings.

const focusX = (random() - 0.5) * bloom.size * 0.18;
const focusY = (random() - 0.5) * bloom.size * 0.18;
const path = createOrganicPatchPath(radiusX, radiusY, random, 0.5);

The number of retained blooms is capped.

const maxWetBlooms = 96;

Even if the pointer moves aggressively, old blooms are dropped. For a background brush, page responsiveness matters more than preserving a perfect drawing history.

Do Not Change Color Too Quickly Inside A Stroke

The brush color changes every 36 draws.

const brushColorHoldDraws = 36;
const colorIndex = Math.floor(sequence / brushColorHoldDraws) % brushColors.length;

If color changes on every pointermove, a rainbow marker runs across the paper. For transparent watercolor, the same pigment needs time to spread through water and become thinner.

The number 36 works together with input throttling. The throttle described below has a minimum of 18 ms, so even during continuous drawing, 36 draws last at least about 648 ms. In practice, the distance condition also applies, so one color remains a little longer than that.

The color array itself uses pale, low-alpha colors rather than strong primary colors. It contains blue, red, yellow, green, and purple, but each one is around a = 0.04 to 0.05. For watercolor in the background, layered density matters more than flashy hue changes.

Throttle Input

Drawing every pointermove gets expensive quickly. It also makes the line too dense and marker-like.

So input is throttled by both time and distance.

if (
  event.type === "pointermove" &&
  now - lastDrawTime < 18 &&
  Math.hypot(x - lastDrawX, y - lastDrawY) < size * 0.08
) {
  return;
}

If less than 18 ms has passed and the movement is less than 8% of brush size, nothing is drawn. Throttling only by time leaves too many gaps when the pointer moves quickly. Throttling only by distance creates too many events when the pointer moves slowly. Checking both balances appearance and cost.
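The double condition reads cleanly as a predicate:

```javascript
// Time + distance throttle for pointermove, as described above.
// A draw is skipped only when BOTH the interval and the travel are small.
function shouldSkipDraw(eventType, now, last, x, y, size) {
  return (
    eventType === "pointermove" &&
    now - last.time < 18 &&
    Math.hypot(x - last.x, y - last.y) < size * 0.08
  );
}
```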

This is not only an optimization. Subtle watercolor unevenness comes from not drawing too much. It is interesting when a performance constraint turns directly into a material constraint.

Keep Mutable State Out Of Vue

Even though this is connected as a Vue island, the brush internals are not put into Vue reactivity. Inside createBrushLayer, renderer state is closed over in the drawing object.

sequence
lastDrawX
lastDrawY
lastDrawTime
wetBlooms
wetBloomFrame

These are renderer state, not UI state. Making values reactive when they do not update the template only adds tracking overhead and makes the code harder to see through. From Vue's point of view, the brush layer can be a small imperative object with draw, resize, and clear.

When interactive graphics live inside a UI framework, I think it is important to separate "state the user can see" from "state only the renderer needs to know."

Keep The Background From Hurting Reading

The background canvases sit below page content. Controls sit above, but the brush only draws from pointer events when the user enables it. The tooltip only appears on hover or focus.

This is more important than it may look. No matter how well the watercolor is rendered, it fails if it reduces text contrast. On this site's home page, an overlay and blur are added where the article list overlaps the hero. The watercolor system is not only the pretty pigment; it also includes the layers that keep text readable.

What To Test

Unit tests are not enough for this kind of rendering. Having a canvas element and having the correct image on it are different things.

This site uses Playwright to check at least these properties.

  • the fallback canvas is not blank

  • the WebGPU canvas initializes when available

  • the graph view and article UI are not broken by the background

  • the tooltip does not remain visible forever

  • code blocks and authored Markdown SVGs stay within the theme

  • brush color does not change too quickly inside one stroke

  • the bleeding edge gains alpha after a delay

  • a pointer burst finishes within a bounded time

It is hard to assert "does not look like a double ring" directly. But it is possible to measure that outer alpha changes later, that hue does not jump wildly, and that the canvas is not blank. Even for visual expression, if you decompose the properties you want to protect, they can become regression tests.

Summary

When doing transparent watercolor on the Web, you do not need to start with an accurate fluid simulation. At least on this site, these decisions mattered.

  • split paper and brush into separate layers

  • bake the paper once with a WebGPU fullscreen triangle

  • in the paper shader, build height, normals, cavities, and ridges instead of only color noise

  • do not pass a time uniform, and keep the paper still

  • cap DPR at 1.25 so the background does not pay for unnecessary fragments

  • bake the brush into Canvas 2D instead of making it a stateful WebGPU simulation

  • build strokes with organic Path2D, low alpha, and multiply

  • add pigment pools, granulation, and destination-out water lift separately

  • delay bleeding as a short 6 to 8 frame bloom

  • hold color for 36 draws so it does not become a rainbow marker

  • throttle pointermove by time and distance

  • keep renderer mutable state out of Vue reactivity

  • in Playwright, measure the properties that must not break instead of chasing pixel perfection

Making something look real and making it heavy are not the same thing. Transparent-watercolor-ness is not made by solving water, paper, pigment, and time perfectly. It can be made by layering the parts a reader perceives lightly, briefly, and in ways that fail quietly.