Obscurus Development | Shadow fixes, optimizations, and thesis reflections

Shadow fixes (sorta)

After getting in contact with Bitsquid, I have had my suspicions confirmed: the renderer in this alpha-state Oculus Rift build renders the lights from different offsets in each eye. This is apparently caused by how the engine is structured, and my main contact at Bitsquid is looking into how difficult it would be to restructure the renderer so that the problem disappears.

Optimizing in Bitsquid

This is a topic that deserves its own blog post, and I'll get to that, but for now I just want to document the steps I've taken to optimize my game. Working in Bitsquid can be very interesting, as it is a data-driven engine unlike conventional engines such as Unity or UDK. You can do certain things a lot faster or differently than in other engines, and tricks that work in those engines might not necessarily work in Bitsquid. This became very evident when I started to optimize Obscurus for the Oculus Rift.

To achieve the best possible experience in virtual reality, you need as low a latency as possible between the player moving their head and the game updating the viewport. 20 ms is the upper threshold according to John Carmack, Oculus's new CTO. It is also advisable to get your game running at as high a frame rate as possible, and to keep it stable.
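As a rough sanity check (my own back-of-the-envelope arithmetic, not an official Oculus breakdown), rendering a single frame already eats a large slice of that 20 ms budget:

```python
# Rough motion-to-photon budget sketch (illustrative numbers): at a
# given frame rate, one frame of rendering alone consumes most of the
# 20 ms latency budget, leaving little for tracking and scan-out.

LATENCY_BUDGET_MS = 20.0

def frame_time_ms(fps):
    """Time to render one frame at the given frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 75):
    ft = frame_time_ms(fps)
    remaining = LATENCY_BUDGET_MS - ft
    print(f"{fps} fps -> {ft:.1f} ms/frame, {remaining:.1f} ms left "
          "for tracking, game logic, and display")
```

At 30 fps a frame alone takes 33 ms and the budget is already blown, which is why a high, stable frame rate matters so much here.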

60 frames per second is my goal for Obscurus, and I thought it would be easily attainable: since it is a confined corridor game, most objects should be occluded. Imagine my surprise when I was getting 25 fps on a powerful rig! It took some time, and some questions on the Bitsquid Q&A forum, to figure out what was happening. The forum is great in that it has a lot of talent, mostly Swedish, working in the engine. The only problem is that it is a very small pool of developers, something that will change as Bitsquid gets more widely adopted, I'm sure.

I got some sound advice on how to optimize my game. I had carelessly just copy-pasted my assets (I love this feature!), building my entire level from the basic building blocks of wall, roof, and floor. The problem with building this way is the number of draw calls you get. I was submitting more than 20,000 batches per frame. That really taxes your computer, no matter how powerful it is!
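To see why the copy-paste approach explodes, here is a toy draw-call count (the scene numbers are hypothetical, and Bitsquid's real batching is more involved than this):

```python
# Toy draw-call arithmetic (hypothetical scene numbers, not my actual
# level data): every individually placed unit costs at least one
# batch, so a level built from thousands of copy-pasted tiles racks
# up thousands of draw calls per frame.

from collections import Counter

# Each entry names the mesh a placed unit uses.
placed_units = ["wall"] * 9000 + ["floor"] * 7000 + ["roof"] * 5000

naive_batches = len(placed_units)          # one batch per placed unit
unique_meshes = Counter(placed_units)      # what instancing can merge
instanced_batches = len(unique_meshes)     # one batch per unique mesh

print(naive_batches)      # 21000 batches the naive way
print(instanced_batches)  # 3 batches if every copy is instanced
```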

Type ‘Perfhud artist’ for a useful overlay

So how should I go about fixing this problem with my shadow casters? Simple, really. I instanced my floor, wall, and roof meshes, as they are repeated a few hundred times each. This immediately yielded an almost 100% increase in frames per second. The next step is to combine as many assets as possible into larger ones. I am currently in the process of creating larger building blocks composed of my simpler, smaller assets: 3×3 wall sections, 1×3 floor sections, and so on. This will also speed up the creation of future levels.
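The merge step can be sketched the same way (the tile counts are made up; the 3×3 grouping is from my wall-section example above):

```python
# Sketch of why merging tiles into bigger building blocks helps
# (illustrative numbers): a 3x3 wall section replaces nine
# individually placed wall units with one unit, so the unit count --
# and with it the per-unit overhead -- drops roughly 9x.

import math

def units_needed(tile_count, tiles_per_section):
    """Units to place once tiles are pre-merged into sections."""
    return math.ceil(tile_count / tiles_per_section)

wall_tiles = 9000
print(units_needed(wall_tiles, 1))  # 9000 single tiles
print(units_needed(wall_tiles, 9))  # 1000 merged 3x3 sections
```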

Making new building blocks

While doing this work in Maya, I am also creating occluders inside the objects. Occluders are big planes that hide anything behind them. You don't want to draw something behind a wall, so you put an occluder in the wall, and voilà, nothing behind it is drawn. An occluder in action can be seen below.

Occluders in action
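The idea behind an occluder can be boiled down to a line-of-sight test. This is a minimal 2D sketch of my own (toy geometry, not Bitsquid's actual culling code): a point is culled when the segment from the camera to the point crosses the occluder.

```python
# Toy 2D occlusion test: a point is hidden if the camera-to-point
# segment properly crosses the occluder segment. Real engines do this
# in 3D against planes/frustums, but the principle is the same.

def _cross(ox, oy, ax, ay, bx, by):
    """2D cross product of (a - o) and (b - o)."""
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    d1 = _cross(*q1, *q2, *p1)
    d2 = _cross(*q1, *q2, *p2)
    d3 = _cross(*p1, *p2, *q1)
    d4 = _cross(*p1, *p2, *q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def is_occluded(camera, point, occluder_a, occluder_b):
    """Is `point` hidden from `camera` by the occluder segment?"""
    return segments_intersect(camera, point, occluder_a, occluder_b)

camera = (0.0, 0.0)
wall_a, wall_b = (5.0, -10.0), (5.0, 10.0)   # occluder inside a wall
print(is_occluded(camera, (10.0, 0.0), wall_a, wall_b))  # True: behind
print(is_occluded(camera, (2.0, 0.0), wall_a, wall_b))   # False: in front
```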

The downside is that it takes a bit of time to create all the building blocks I need to recreate my level. Then, of course, I need to actually rebuild the level, rewriting its Flow scripts too. As mentioned, though, it will really speed up future development of the title.

A final optimization tip, which I luckily had already implemented, is to create simplified meshes for shadow casting as you build your assets. You don't need the super-high-detail mesh for shadow casting; a simpler series of boxes will often suffice. It can also be a good idea to create level-of-detail meshes at this point, and to use the shadow mesh as the lowest level of detail. An example of LODs and a shadow caster from the Fatshark game Hamilton's Great Adventure can be seen below.

Level of Detail and Shadow Caster meshes
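Selection between those meshes can be sketched as a simple distance lookup (the thresholds and mesh names are made up for illustration; the engine configures this per asset), with the shadow caster doubling as the coarsest level:

```python
# Sketch of distance-based LOD selection, with the shadow-caster mesh
# reused as the lowest level of detail (hypothetical thresholds and
# mesh names, purely for illustration).

def select_lod(distance, lods):
    """Pick the first LOD whose max distance covers `distance`;
    past every threshold, fall back to the coarsest entry."""
    for max_dist, mesh in lods:
        if distance <= max_dist:
            return mesh
    return lods[-1][1]  # beyond all thresholds: keep the coarsest

# (max distance in meters, mesh), finest first, coarsest last; the
# boxy shadow caster serves as the final level of detail.
pillar_lods = [
    (10.0, "pillar_lod0"),
    (30.0, "pillar_lod1"),
    (1e9, "pillar_shadow_caster"),
]

print(select_lod(5.0, pillar_lods))    # pillar_lod0
print(select_lod(20.0, pillar_lods))   # pillar_lod1
print(select_lod(200.0, pillar_lods))  # pillar_shadow_caster
```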

Master Thesis reflections

I had the pleasure of listening to my friend Andreas Tarandi give a first presentation of the Master's Thesis in Computer Science that he did for DICE. It detailed a new sort of renderer for tessellated objects that can provide an object's texture in real time, without the archaic notion of UV mapping. Instead, his method uses triangle strips, storing the colour values of the triangles' hexes in an array. In this way, he could render an obscenely high-poly face (millions of triangles) at 30+ fps (real time).

It was especially interesting as I am in the process of starting my own master's thesis. I have already applied to a few companies about doing different kinds of work for them, including Oculus Rift support and tools for integrating user-created content. I hope they respond soon, as I can't wait to start working in the video game industry, and I think I bring many competencies and qualities to the table.

Obscurus development | Shadow problems

Working in Bitsquid is fun. Working in Bitsquid as a student is not as much fun. Working in Bitsquid as a lone developer is downright what it is: work.

I have been working on my first-person pacifist puzzle game Obscurus since November 2013, and I am very close to finishing the demo/first level. The game is meant to be played with an Oculus Rift or another head-mounted display in order to immerse the player in virtual reality. The game's puzzles rely on the player being immersed as if in a real world: being blinded, experiencing vertigo, or getting lost in a cramped maze. Development had gone really well, until it was time to integrate Oculus Rift support for real.

Surprisingly, the main problem isn't that Bitsquid lacks native support for it. I got in contact with the nice folks at Bitsquid and they hooked me up with an experimental build. After some frustration (more like a TON of frustration), which turned out to be because I had forgotten to render the environment to the stereo renderer, I finally got to play my game in VR.

As amazing as it was to work in VR, I noticed some interesting errors: the dynamic shadows were different in each eye. After diving into the HLSL shader code, I have stared myself blind and am no closer to solving the problem. Hopefully the new week will bring some fresh eyes and finally a solution.