For the last year and a half or so, I’ve been working on a project I like to call Enhanced 2D Visibility. Visibility mechanics in 2D games have been around at least since the original Rogue, and appear more recently in games like Monaco. The basic idea is that if you don’t have direct line of sight to an area of the screen, then it shouldn’t be visible.
A screenshot from Monaco
This adds a nice level of immersion by restricting the information the player has to what a Flatland-esque 2D entity might have. The problem with this technique is that it has a distinct lack of GPU-melting recursive structures. This is why I wanted to add a level of complexity on top of the standard 2D visibility to support reflection and refraction. Refraction has proven to be difficult, and might not be entirely possible in a real-time context. Reflection, however, is at least a bit simpler, and I’m coming close to having a functional tech demo for it. Before I go into the details of reflection and refraction, let’s go over how basic 2D visibility is usually implemented.
I implemented the basic visibility using a ray-casting approach like the one described here by Red Blob Games. There’s a very nice list of other resources on that same site here if you want to implement this yourself. I’ll give a brief overview of how I implemented this, as the resources above explain it far better than I can.
The major steps are as follows:
- Get a list of all the solid objects in the scene, represented as line segments.
- Store the endpoints of all the segments in a list, sorted by angle from some axis (I used the positive x-axis).
- Cast a ray from the player to each endpoint in the scene, keeping the intersection closest to the player.
- Cast two more rays offset slightly clockwise and counter-clockwise of each endpoint ray, so they slip just past the segment’s edge and hit whatever lies behind it.
- Store all of these intersection points in a list.
- Draw the points as a triangle fan around the player into the OpenGL stencil buffer (or use some other masking method).
- Draw the rest of the scene.
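To make those steps concrete, here’s a minimal Python sketch of the raycasting core. The function names and the small angular offset `eps` are my own inventions, not from any particular engine; a real implementation would feed the sorted points into the stencil-buffer draw.

```python
import math

def ray_segment_hit(origin, angle, seg):
    """Intersect a ray (origin, angle) with a segment ((x1, y1), (x2, y2)).
    Returns the distance t along the ray, or None on a miss."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    (x1, y1), (x2, y2) = seg
    sx, sy = x2 - x1, y2 - y1
    denom = sx * dy - sy * dx
    if abs(denom) < 1e-12:              # ray parallel to the segment
        return None
    # Solve origin + t*dir == p1 + u*(p2 - p1) for t (ray) and u (segment)
    t = (sx * (y1 - oy) - sy * (x1 - ox)) / denom
    u = (dx * (y1 - oy) - dy * (x1 - ox)) / denom
    if t >= 0 and 0 <= u <= 1:
        return t
    return None

def visibility_points(origin, segments, eps=1e-4):
    """One ray per endpoint plus two slightly offset rays, as in the
    steps above; returns hit points sorted by angle for a triangle fan."""
    pts = []
    for seg in segments:
        for end in seg:
            base = math.atan2(end[1] - origin[1], end[0] - origin[0])
            for a in (base - eps, base, base + eps):
                hits = [t for s in segments
                        if (t := ray_segment_hit(origin, a, s)) is not None]
                if hits:
                    t = min(hits)        # closest intersection wins
                    pts.append((origin[0] + t * math.cos(a),
                                origin[1] + t * math.sin(a), a))
    pts.sort(key=lambda p: p[2])         # angle-sorted for the fan
    return [(x, y) for x, y, _ in pts]
```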
That’s a very rough overview, but it gives something that looks like this:
Now, let’s get back to what I meant by GPU-melting recursive structures. What happens when you have a mirror facing another mirror? In real life, it produces a seemingly infinite hallway of reflections of reflections of reflections.
This screams recursion. Whenever you draw a mirror, you then have to draw any mirrors visible in the reflection of that mirror, which may or may not include the original mirror.
Let’s find our entry point. You start at the position of the player, with no reflections applied. Great, let’s push that state onto a queue. Why a queue? The obvious way to do this is to call the render function recursively with each new state, but that crashes too easily: with two facing mirrors the recursion never bottoms out. An explicit collection lets us limit the number of reflection iterations to a manageable amount. Why a queue and not a stack? With a stack, the rendering process would follow one mirror down to the furthest depths of reflections of reflections before drawing anything for any other mirrors. A queue guarantees that the first-level mirrors get rendered first, second-level second, and so on.
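Here’s a toy Python sketch of that queue discipline. The mirror “graph” stands in for the actual visibility computation, and the depth cap and names are invented for illustration; the point is just the FIFO ordering.

```python
from collections import deque

MAX_DEPTH = 3  # cap on reflection levels (a hypothetical tuning knob)

def render_order(mirror_graph, start):
    """Breadth-first walk over reflection states.

    `mirror_graph` maps a mirror id to the mirrors visible in its
    reflection; `start` lists the mirrors visible from the player.
    Returns the order states would be drawn in, showing that all
    first-level mirrors render before any second-level one."""
    order = []
    queue = deque((m, 1) for m in start)        # depth-1 states from the player
    while queue:
        mirror, depth = queue.popleft()         # FIFO: shallow states first
        order.append((mirror, depth))
        if depth < MAX_DEPTH:
            for nxt in mirror_graph.get(mirror, []):
                queue.append((nxt, depth + 1))  # may re-enqueue the same mirror
    return order
```

With two facing mirrors `{'A': ['B'], 'B': ['A']}`, the depths come out non-decreasing: every level-1 state is drawn before any level-2 state, which is exactly the property a stack would break.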
Next, build the visibility mask as described in the last section. However, now we need some extra bookkeeping. When we store the points we get from raycasting, we also want to store which segment each one lies on. Then, once we have all the points, we can go through the list and pick out the visible portions of the reflective segments. That looks something like this:
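In code, that bookkeeping might look something like the following sketch (the names and the `(point, segment)` pairing are my own, not from the post’s implementation):

```python
def visible_mirror_spans(hits, mirrors):
    """Group raycast hits by the segment they landed on, keeping only
    the reflective segments. `hits` is a list of (point, segment)
    pairs; `mirrors` is the set of reflective segments."""
    spans = {}
    for point, seg in hits:
        if seg in mirrors:
            spans.setdefault(seg, []).append(point)
    return spans

def visible_sub_segment(points, seg):
    """The visible part of a mirror runs between the extreme hit
    points, measured by the parameter t along the segment."""
    (x1, y1), (x2, y2) = seg
    dx, dy = x2 - x1, y2 - y1
    def t(p):
        return ((p[0] - x1) * dx + (p[1] - y1) * dy) / (dx * dx + dy * dy)
    return min(points, key=t), max(points, key=t)
```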
Then, for each visible mirror segment, we need to find the transform matrix for its reflection (detailed here). Once we have that, push each matrix onto the queue I mentioned earlier, pop the next one off the front, and repeat.
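One way to build that matrix in homogeneous coordinates is to translate the line to the origin, reflect about a line through the origin at the segment’s angle, and translate back. This is my own formulation, not necessarily the one from the linked derivation:

```python
import math

def reflection_matrix(p1, p2):
    """3x3 homogeneous matrix reflecting across the line through p1, p2.
    Built as T(p1) * R(theta) * T(-p1), where R reflects about a line
    through the origin at angle theta."""
    theta = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    px, py = p1
    # translation column is p1 - R * p1
    return [
        [c,  s, px - c * px - s * py],
        [s, -c, py - s * px + c * py],
        [0,  0, 1],
    ]

def apply(m, p):
    """Apply a 3x3 homogeneous transform to a 2D point."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```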
There’s one detail we’re missing: we need to know what area of the scene to draw behind the mirror. This is simple enough if we keep a boundary polygon alongside the reflection matrix in the queue. We find the region behind the mirror, reflect it across the mirror, and that’s our boundary polygon. The catch is that while the first layer of mirrors only needs that region clipped to the screen (a rectangle), every subsequent level needs to clip against the arbitrary polygon that bounded the previous reflection. This is where I am now; I’ll update when I figure it out.
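For what it’s worth, clipping one polygon against a convex boundary is a well-studied problem; a minimal sketch of the Sutherland–Hodgman algorithm, assuming the clip polygon is convex and wound counter-clockwise, might look like this:

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip `subject` against the convex polygon
    `clip` (both counter-clockwise lists of (x, y) points)."""
    def inside(p, a, b):
        # p is on the left of (or on) the directed clip edge a -> b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # intersection of line pq with the line through clip edge ab
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_pts, output = output, []
        for j in range(len(input_pts)):
            p, q = input_pts[j], input_pts[(j + 1) % len(input_pts)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output
```

The wrinkle is that Sutherland–Hodgman requires the clip polygon to be convex, and a reflected boundary polygon won’t always be, so this is at best a piece of the eventual answer rather than the whole thing.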
If we put all this together, we get something that looks like this:
Not exactly like that, however. That screenshot is from a previous attempt at mirrors reflecting mirrors that involved rendering to offscreen textures. The program ran exactly long enough to take a screenshot before it claimed all the available memory and crashed. Hopefully this iteration will go a bit better.
I’m really not sure where to go with refraction. All I know is that it would look cool and let me make lenses. It doesn’t seem to be a linear transformation, so if I do end up writing it in, it will probably be rather slow and take some craziness to get it working with OpenGL. If you have any ideas on how to go about implementing refraction, shoot me an email at the address below.
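For a single ray, the direction change is given by Snell’s law, and the vector form is easy to write down; the catch is that the bend depends on the incoming angle, which is exactly why no one matrix transforms the whole scene. A sketch of the per-ray computation (my own helper, offered only as a starting point):

```python
import math

def refract(d, n, eta):
    """Refract unit direction d through a surface with unit normal n
    (pointing toward the incoming ray), where eta = n1 / n2.
    Returns the refracted unit direction, or None on total internal
    reflection."""
    cos_i = -(d[0] * n[0] + d[1] * n[1])
    k = 1 - eta * eta * (1 - cos_i * cos_i)
    if k < 0:
        return None                      # total internal reflection
    coef = eta * cos_i - math.sqrt(k)
    return (eta * d[0] + coef * n[0], eta * d[1] + coef * n[1])
```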