Photoshop Blend Modes Without Backbuffer Copy

For the past couple of weeks, I have been trying to replicate the Photoshop blend modes in Unity. It is no easy task; despite the advances of modern graphics hardware, the blend unit still resists being programmable and will probably remain fixed for some time. Some OpenGL ES extensions implement this functionality, but most hardware and APIs don’t. So what options do we have?

1) Backbuffer copy

A common approach is to copy the entire backbuffer before doing the blending. This is what Unity does, and after that it’s trivial to implement any blending you want in shader code. The obvious problem is the cost of that full backbuffer copy. There are possible optimizations, such as copying only the region you need into a smaller texture, but it gets complicated once you have many objects using blend modes. You can also do a single backbuffer copy and reuse it, but then you can’t stack different blended objects on top of each other. In Unity this copy is done via a GrabPass, and it is the approach used by the Blend Modes plugin.
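For reference, a minimal GrabPass shader can look like the sketch below. The shader name, the grab texture name and the Darken-style combine in the fragment shader are placeholders for illustration, not the plugin’s actual code.

Shader "Hypothetical/GrabPassBlend"
{
    Properties { _MainTex ("Sprite", 2D) = "white" {} }
    SubShader
    {
        Tags { "Queue" = "Transparent" }
        GrabPass { "_BackgroundTexture" }   // the expensive backbuffer copy
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            sampler2D _MainTex;
            sampler2D _BackgroundTexture;
            struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; float4 grabPos : TEXCOORD1; };
            v2f vert(appdata_full v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord.xy;
                o.grabPos = ComputeGrabScreenPos(o.pos); // where this pixel lands in the grabbed texture
                return o;
            }
            fixed4 frag(v2f i) : SV_Target
            {
                fixed4 src = tex2D(_MainTex, i.uv);
                fixed4 dst = tex2Dproj(_BackgroundTexture, i.grabPos);
                // With the backbuffer in hand, any Photoshop formula is trivial; Darken shown here.
                return fixed4(lerp(dst.rgb, min(src.rgb, dst.rgb), src.a), 1.0);
            }
            ENDCG
        }
    }
}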

2) Leveraging the Blend Unit

Modern GPUs have a little unit at the end of the graphics pipeline called the Output Merger. It’s the hardware responsible for taking the output of a pixel shader and blending it with the backbuffer. It’s not programmable, as making it so comes with quite a lot of complications (you can read about it here), so current GPUs don’t have a programmable blend stage.

The blend mode formulas were obtained here and here. Use them as a reference to compare against what I provide; there are many other sources. One thing I’ve noticed is that the formulas people provide often neglect to mention that Photoshop actually uses modified formulas and clamps quantities in a different manner, especially when dealing with alpha. Gimp does the same. What follows is my experience recreating the Photoshop blend modes exclusively with a combination of the blend unit and shaders. The first few blend modes are simple, but as we progress we’ll have to resort to more and more tricks to get what we want.

Two caveats before we start. First, Photoshop blend modes do their blending in sRGB space, which means that if you do them in linear space they will look wrong; generally this isn’t a problem, but keep it in mind. Second, because of the amount of trickery we’ll be doing, many of the values need to go beyond the 0 – 1 range, which means we need an HDR buffer for the calculations. Unity can do both by marking the camera as HDR in the camera settings and selecting Gamma as the color space in the Player Settings. This is clearly undesirable if you do your lighting calculations in linear space; in a custom engine you could probably set this up differently (to allow for linear lighting).

If you want to try the code out while you read ahead, download it here.

A) Darken

Formula min(SrcColor, DstColor)
Shader Output
Blend Unit Min(SrcColor · One, DstColor · One)

darken

As alpha approaches 0, we need the minimum to tend towards DstColor, which we achieve by pushing SrcColor towards the maximum possible color, float3(1, 1, 1).
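A sketch of how this can translate into a Unity shader pass, assuming a plain sprite texture as the source (illustrative, not necessarily the exact code in the download):

Shader "Hypothetical/BlendDarken"
{
    Properties { _MainTex ("Sprite", 2D) = "white" {} }
    SubShader
    {
        Tags { "Queue" = "Transparent" }
        Pass
        {
            BlendOp Min
            Blend One One   // Min(SrcColor * One, DstColor * One)
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"
            sampler2D _MainTex;
            fixed4 frag(v2f_img i) : SV_Target
            {
                fixed4 src = tex2D(_MainTex, i.uv);
                // As alpha goes to 0, push SrcColor to white so the Min picks DstColor.
                return fixed4(lerp(float3(1.0, 1.0, 1.0), src.rgb, src.a), src.a);
            }
            ENDCG
        }
    }
}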

B) Multiply

Formula SrcColor · DstColor
Shader Output
Blend Unit SrcColor · DstColor + DstColor · OneMinusSrcAlpha

multiply
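For single-pass modes like this one, the Blend Unit row maps directly onto a ShaderLab Blend statement. A minimal sketch, assuming the fragment shader outputs the sprite color premultiplied by its alpha so the whole thing reduces to lerp(DstColor, SrcColor · DstColor, SrcAlpha):

Shader "Hypothetical/BlendMultiply"
{
    Properties { _MainTex ("Sprite", 2D) = "white" {} }
    SubShader
    {
        Tags { "Queue" = "Transparent" }
        Pass
        {
            Blend DstColor OneMinusSrcAlpha   // SrcColor * DstColor + DstColor * OneMinusSrcAlpha
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"
            sampler2D _MainTex;
            fixed4 frag(v2f_img i) : SV_Target
            {
                fixed4 src = tex2D(_MainTex, i.uv);
                // Premultiplying by alpha makes the blend collapse to DstColor when alpha is 0.
                return fixed4(src.rgb * src.a, src.a);
            }
            ENDCG
        }
    }
}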

C) Color Burn

Formula 1 – (1 – DstColor) / SrcColor
Shader Output
Blend Unit SrcColor · One + DstColor · OneMinusSrcColor

color-burn

D) Linear Burn

Formula SrcColor + DstColor – 1
Shader Output
Blend Unit SrcColor · One + DstColor · One

linear-burn

E) Lighten

Formula Max(SrcColor, DstColor)
Shader Output
Blend Unit Max(SrcColor · One, DstColor · One)

lighten

F) Screen

Formula 1 – (1 – DstColor) · (1 – SrcColor) = Src + Dst – Src · Dst
Shader Output
Blend Unit SrcColor · One + DstColor · OneMinusSrcColor

screen

G) Color Dodge

Formula DstColor / (1 – SrcColor)
Shader Output
Blend Unit SrcColor · DstColor + DstColor · Zero

color-dodge

You can see discrepancies between the Photoshop and the Unity version in the alpha blending, especially at the edges.

H) Linear Dodge

Formula SrcColor + DstColor
Shader Output
Blend Unit SrcColor · SrcAlpha + DstColor · One

linear-dodge

This one also exhibits color “bleeding” at the edges. To be honest, I prefer the one on the right just because it looks more “alive” than the other one. The same goes for Color Dodge. However, this breaks the 1-to-1 mapping with Photoshop/Gimp.

All of the previous blend modes have simple formulas, and one way or another they can be implemented with a few instructions and the correct blending mode. However, some blend modes have conditional behavior or expressions that are complex relative to what the blend unit can do, and they need a bit of re-thinking. Most of the blend modes that follow needed a two-pass approach (using the Pass syntax in your shader). Two-pass shaders in Unity have a limitation: the two passes aren’t guaranteed to render one right after the other for a given material. Since these blend modes rely on the result of the previous pass, you’ll get weird artifacts. If you have two overlapping sprites (as in a 2D game, such as our use case), the sorting will be undefined. The workaround is to set the Order in Layer property to force them to sort properly.

I) Overlay

Formula 1 – (1 – 2 · (DstColor – 0.5)) · (1 – SrcColor), if DstColor > 0.5
2 · DstColor · SrcColor, if DstColor <= 0.5
Shader Pass 1
Blend Pass 1 SrcColor · DstColor + DstColor · DstColor
Shader Pass 2
Blend Pass 2 SrcColor · DstColor + DstColor · Zero

overlay

How I ended up with Overlay requires an explanation. We take the original formula and approximate it with a linear blend of the two branches, using Dst as the interpolation factor:

{ 2 \cdot Dst \cdot Src \cdot (1 - Dst) + (1 - 2 \cdot (1 - Dst) \cdot (1 - Src)) \cdot Dst}

We simplify as much as we can and end up with this

{ (4 \cdot Src - 1) \cdot Dst + (2 - 4 \cdot Src) \cdot Dst \cdot Dst }

The only way I found to get the DstColor · DstColor term is to isolate it and build it across two passes, so we extract the common factor (2 – 4 · Src) from both terms:

{ \Big[(4 \cdot Src - 1) \cdot \frac {Dst} {(2 - 4 \cdot Src)} + Dst \cdot Dst\Big] \cdot (2 - 4 \cdot Src) }

However this formula doesn’t take alpha into account. We still need to linearly interpolate this big formula with alpha, where an alpha of 0 should return Dst. Therefore

{ \Big[(4 \cdot Src - 1) \cdot \frac {Dst} {(2 - 4 \cdot Src)} + Dst \cdot Dst\Big] \cdot (2 - 4 \cdot Src) \cdot a + Dst \cdot (1 - a) }

If we fold that last term into the bracketed expression, we can still do it in two passes. We need to be careful to clamp the alpha value with max(0.001, a), because we’re now potentially dividing by 0. The final formula is

{ K_1 = \frac{4 \cdot Src - 1} {2 - 4 \cdot Src} }

{ K_2 = \frac{1 - a} {(2 - 4 \cdot Src) \cdot a} }

{ \Big[Dst \cdot (K_1 + K_2) + Dst \cdot Dst \Big] \cdot (2 - 4 \cdot Src) \cdot a }
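Putting it together, a two-pass sketch could look like the following, with the first pass outputting K1 + K2 and the second outputting (2 – 4 · Src) · a. This is an illustration under those assumptions, not necessarily the exact shader in the download; note also that (2 – 4 · Src) is itself 0 when Src = 0.5.

Shader "Hypothetical/BlendOverlay"
{
    Properties { _MainTex ("Sprite", 2D) = "white" {} }
    SubShader
    {
        Tags { "Queue" = "Transparent" }

        // Pass 1: the blend unit computes (K1 + K2) * Dst + Dst * Dst
        Pass
        {
            Blend DstColor DstColor
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"
            sampler2D _MainTex;
            fixed4 frag(v2f_img i) : SV_Target
            {
                fixed4 src = tex2D(_MainTex, i.uv);
                float a = max(0.001, src.a);
                float3 denom = 2.0 - 4.0 * src.rgb;   // note: 0 when Src = 0.5
                float3 k1 = (4.0 * src.rgb - 1.0) / denom;
                float3 k2 = (1.0 - a) / (denom * a);
                return fixed4(k1 + k2, a);
            }
            ENDCG
        }

        // Pass 2: multiplies the previous result by (2 - 4 * Src) * a
        Pass
        {
            Blend DstColor Zero
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"
            sampler2D _MainTex;
            fixed4 frag(v2f_img i) : SV_Target
            {
                fixed4 src = tex2D(_MainTex, i.uv);
                return fixed4((2.0 - 4.0 * src.rgb) * max(0.001, src.a), src.a);
            }
            ENDCG
        }
    }
}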

J) Soft Light

Formula 1 – (1 – DstColor) · (1 – (SrcColor – 0.5)), if SrcColor > 0.5
DstColor · (SrcColor + 0.5), if SrcColor <= 0.5
Shader Pass 1
Blend Pass 1 SrcColor · DstColor + DstColor · SrcColor
Shader Pass 2
Blend Pass 2 SrcColor · DstColor + DstColor · Zero

soft-light

For Soft Light we apply very similar reasoning to Overlay, which in the end leads us to Pegtop’s formula. Both differ from Photoshop’s version in that they don’t have a discontinuity. This one also has a darker fringe when alpha blending.
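For reference, Pegtop’s soft light formula is usually written as shown below; it returns Dst exactly when Src = 0.5, which is why it has no discontinuity.

{ (1 - 2 \cdot Src) \cdot Dst^2 + 2 \cdot Src \cdot Dst }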

K) Hard Light

Formula 1 – (1 – DstColor) · (1 – 2 · (SrcColor – 0.5)), if SrcColor > 0.5
DstColor · (2 · SrcColor), if SrcColor <= 0.5
Shader Pass 1
Blend Pass 1 SrcColor · One + DstColor · One
Shader Pass 2
Blend Pass 2 SrcColor · DstColor + DstColor · Zero

hard-light

Hard Light relies on a very delicate hack to work and blend with alpha: in the first pass we divide by a magic number, only to multiply by it again in the second pass! That’s because when alpha is 0 the result needs to be DstColor, but without the trick it came out black.

L) Vivid Light

Formula 1 – (1 – DstColor) / (2 · (SrcColor – 0.5)), if SrcColor > 0.5
DstColor / (1 – 2 · SrcColor), if SrcColor <= 0.5
Shader Pass 1
Blend Pass 1 SrcColor · DstColor + DstColor · Zero
Shader Pass 2
Blend Pass 2 SrcColor · One + DstColor · OneMinusSrcColor

vivid-light

M) Linear Light

Formula DstColor + 2 · (SrcColor – 0.5), if SrcColor > 0.5
DstColor + 2 · SrcColor – 1, if SrcColor <= 0.5
Shader Output
Blend Unit SrcColor · One + DstColor · One

linear-light

Replay system using Unity

Unity is an incredible tool for making quality games at a blazing fast pace. However, like all closed systems it has limits on how far you can extend the engine, and one area where this shows is building a good replay system for a game. I will talk about two possible approaches and how to solve other issues along the way. The system was devised for a Match 3 prototype, but it can be applied to any project. There are commercial solutions available, but this post is intended for coders.

If the game has been designed with a deterministic outcome in mind, the essential things to record are input events, delta times (optional) and random seeds. In other words, the only external input to the game is the player’s actions; everything else must simulate deterministically to arrive at the same outcome. Since storing random seeds and restoring them as appropriate is more or less straightforward, we will focus on the input.
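For completeness, the seed part can be as small as the sketch below, assuming Unity 5.4 or newer (where Random.InitState exists); ReplayData and the method names are made up for illustration.

using UnityEngine;

// Hypothetical container for the data a replay needs besides the input events.
public class ReplayData
{
    public int seed;
}

public static class ReplaySeed
{
    // Recording: pick a seed, store it, and initialize the RNG with it.
    public static ReplayData BeginRecording()
    {
        var data = new ReplayData { seed = System.Environment.TickCount };
        Random.InitState(data.seed);
        return data;
    }

    // Playback: re-seed the RNG so every Random call returns the same sequence
    // it did during the original session.
    public static void BeginPlayback(ReplayData data)
    {
        Random.InitState(data.seed);
    }
}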

1) The first issue is how to capture and replay input in the least intrusive way possible. If you have used Unity’s Input class, Input.GetMouseButton(i) should look familiar. The replay system was added after the main mechanics were developed, and we didn’t want to go back and rewrite how the game worked. Such a system should ideally work for future and existing games alike, and Unity already provides a nice interface that other programmers use. Furthermore, plugins use this interface, and not sticking to it can severely limit your ability to record games.

The solution we arrived at was shadowing Unity’s Input class: we created a new class with the same name and, inside it, reached the real one through the UnityEngine namespace. This lets us conditionally route Unity’s input, feeding recorded values out of the Input.GetMouseButtonX functions and essentially ‘tricking’ the game into thinking it is receiving real player input. You can do the same with keys.

There are many functions and properties to override, and it takes time and care to get them all working properly. Once you have this new layer, you can create a RecordManager class and start adding the methods that connect to the new Input class.
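A minimal sketch of the shadowed class is shown below. The MyGame namespace is hypothetical, and the recording state is inlined as static fields to keep the sketch self-contained; in practice that state would live in the RecordManager.

namespace MyGame
{
    // Shadows UnityEngine.Input: game code in this namespace that calls
    // Input.GetMouseButton resolves here instead of Unity's class, so real
    // input can be swapped for recorded input without touching gameplay code.
    public static class Input
    {
        public static bool Replaying;             // set by the RecordManager
        public static bool RecordedMouseButton0;  // value read back from the recording

        public static bool GetMouseButton(int button)
        {
            if (Replaying && button == 0)
                return RecordedMouseButton0;
            return UnityEngine.Input.GetMouseButton(button); // normal play: forward to Unity
        }

        public static UnityEngine.Vector3 mousePosition
        {
            // Forwarded as-is here; the replay branch follows the same pattern.
            get { return UnityEngine.Input.mousePosition; }
        }

        // GetMouseButtonDown/Up, GetKey, GetAxis, ... are shadowed the same way.
    }
}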

2) The second issue is trickier to get working properly, due to common misconceptions (mine included) about how Unity’s update loops work. Unity has two different update loops that serve different purposes, Update and FixedUpdate. Update runs once per rendered frame, whereas FixedUpdate runs at a fixed, specified time interval. FixedUpdate has absolutely nothing to do with Update: no rule says that every Update comes with exactly one FixedUpdate, or with at most one.

Let’s explain it with three use cases. For all of them, the FixedUpdate interval is 0.017 s (~60 fps).

a) Update runs at 60 fps (the same rate as FixedUpdate). In this case every frame runs one FixedUpdate followed by one Update.

b) Update runs faster (120 fps). I have chosen this number because it is exactly double the FixedUpdate rate. In this case there is one FixedUpdate for every two Updates.

c) Update runs slower (30 fps). The same rule applies, but now 30 = 60 / 2: since the frame loop can’t keep up with the fixed timestep, FixedUpdate runs twice per frame to compensate.
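If you want to see the interleaving for yourself, a throwaway logger like the one below (attached to any GameObject) prints the order of the two loops:

using UnityEngine;

// Attach to any GameObject and watch the console to see how the loops interleave.
public class LoopLogger : MonoBehaviour
{
    void Update()
    {
        Debug.Log("Update      frame " + Time.frameCount + "  time " + Time.time);
    }

    void FixedUpdate()
    {
        Debug.Log("FixedUpdate frame " + Time.frameCount + "  fixedTime " + Time.fixedTime);
    }
}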

This brings up two questions: where should I record input events and where should I replay them? And how can I replay something recorded on one computer on another machine and get the same output?

The answer to the first question is: record in Update. It is guaranteed to run every frame, whereas recording in FixedUpdate will cause you to miss input events and mess up your recording. The answer to the second question is a little more open, and depends on how you recorded your data.

One approach is to record deltaTime in every Update and shadow Unity’s Time class the same way we did with Input, so that a recorded Time.deltaTime is read back wherever it’s used. This has two possible issues, namely precision (of the stored deltaTime) and storage size.
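A sketch of that first approach, shadowing Time the same way (the namespace and the static fields are again hypothetical):

namespace MyGame
{
    // Shadows UnityEngine.Time so gameplay code reading Time.deltaTime gets the
    // recorded value during playback. Only the members the game uses need shadowing.
    public static class Time
    {
        public static bool Replaying;
        public static float RecordedDeltaTime;  // filled from the recording each frame

        public static float deltaTime
        {
            get { return Replaying ? RecordedDeltaTime : UnityEngine.Time.deltaTime; }
        }

        public static float time
        {
            get { return UnityEngine.Time.time; } // forward the rest untouched
        }
    }
}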

The second approach is to save events and link them to their corresponding FixedUpdate tick; that way you can associate many events with a single tick (if Update runs fast) or none (if Update runs slow). With this approach you can only execute your replay code in FixedUpdate, running it once for every recorded Update on that tick. It’s also important to save the average Update time of the original recording and set it as the FixedUpdate interval for playback. The simulation will not be 100% accurate, in that Update times won’t fluctuate as they did in the original recording session, but it is guaranteed to execute the same code.
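A sketch of the second approach, keying recorded frames to a FixedUpdate tick counter (InputFrame, TickRecorder and their fields are illustrative, not a full implementation):

using System.Collections.Generic;
using UnityEngine;

// One recorded Update frame: whatever input state the game consumed that frame.
[System.Serializable]
public struct InputFrame
{
    public bool mouseDown;
    public Vector3 mousePosition;
}

public class TickRecorder : MonoBehaviour
{
    // All Update frames recorded while a given FixedUpdate tick was current.
    private readonly Dictionary<int, List<InputFrame>> frames = new Dictionary<int, List<InputFrame>>();
    private int tick;

    void FixedUpdate()
    {
        tick++;  // during playback, replay frames[tick] here, once per recorded Update
    }

    void Update()
    {
        // While recording, attach this frame's input to the current tick.
        List<InputFrame> list;
        if (!frames.TryGetValue(tick, out list))
        {
            list = new List<InputFrame>();
            frames[tick] = list;
        }
        list.Add(new InputFrame
        {
            mouseDown = UnityEngine.Input.GetMouseButton(0),
            mousePosition = UnityEngine.Input.mousePosition
        });
    }
}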

Script Execution Order

There is one last setting needed to properly record events: the RecordManager must record all input at the beginning of every frame. Unity has a Script Execution Order option under Project Settings where you can set the RecordManager to run before any other script. That way, recording and replaying are guaranteed to run before any other script consumes the input for that frame.