E.X.P.L.O.R. in development now! Cool Developer Stuff


Normal Mapping - August 4, 2011

The newest feature implemented in the game is normal mapping (a form of bump mapping). This is a method for adding apparent detail to a rendered mesh without actually adding more geometry.

In the early days of computer graphics, lighting wasn’t calculated at all. Later, lighting was calculated on a per-vertex basis. Today, graphics cards are capable of calculating lighting on a per-pixel basis, and normal mapping is one way of doing that.

Lighting can be computed using the normal vector of a surface in combination with the direction the light is coming from. On a per-vertex basis this is a fairly easy computation, since vertices typically have a normal attached to them. On a per-pixel basis it is a little more difficult, since each pixel doesn’t have a vertex attached to it, and that is where the normal map comes in. The normal map is loaded as if it were a texture, but each pixel in the texture is treated as if it were a normal.
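For illustration, decoding a normal-map texel back into a normal looks something like this (a minimal sketch with invented names, not the engine's loading code, assuming the usual encoding where each color channel in [0,255] maps to a component in [-1,1]):

```cpp
struct Vec3 { float x, y, z; };

// Decode one texel of a normal map: channel c in [0,255] -> n = c/255 * 2 - 1.
Vec3 DecodeNormal(unsigned char r, unsigned char g, unsigned char b)
{
    Vec3 n;
    n.x = r / 255.0f * 2.0f - 1.0f;
    n.y = g / 255.0f * 2.0f - 1.0f;
    n.z = b / 255.0f * 2.0f - 1.0f;
    return n;  // the per-pixel normal used in the lighting calculation
}
```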

An example of a normal map.


From there it seems like it would be easy to calculate lighting based on that. Wrong. Consider that a texture is actually wrapped around a mesh, so each normal in the texture isn’t actually the normal of the pixel we are rendering. It’s a normal that takes no account of the orientation of the vertex it’s applied to.

Therein lies the problem with normal mapping. It is necessary to transform the per-pixel normal into the space of the vertex, or, alternatively, to transform the direction of the light into the space of the vertex, which is a more useful computation.

Transforming the light vector requires an orthogonal transformation into the surface’s tangent space, and for this to be accomplished a tangent vector must be computed for each vertex. The tangent vector depends upon the normal of the vertex in combination with the orientation of the texture about that vertex. It’s a lot of linear algebra and theory that I don’t want to explain here, but after programming a complex algorithm, the tangent vector is now computed in the Emergence engine, which in turn can be used for per-pixel lighting in a pixel shader. The result is shown below.
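To give a flavor of the computation, the standard per-triangle tangent derivation looks something like this (a generic sketch of the textbook method, not the engine's exact code):

```cpp
struct Vec3 { float x, y, z; };

// Tangent of one triangle, from its positions and texture coordinates.
Vec3 TriangleTangent(Vec3 p0, Vec3 p1, Vec3 p2,
                     float u0, float v0, float u1, float v1,
                     float u2, float v2)
{
    // Position deltas along two edges of the triangle.
    float e1x = p1.x - p0.x, e1y = p1.y - p0.y, e1z = p1.z - p0.z;
    float e2x = p2.x - p0.x, e2y = p2.y - p0.y, e2z = p2.z - p0.z;
    // Texture-coordinate deltas along the same edges.
    float du1 = u1 - u0, dv1 = v1 - v0;
    float du2 = u2 - u0, dv2 = v2 - v0;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);
    // The direction in which u increases across the surface.
    Vec3 t;
    t.x = (e1x * dv2 - e2x * dv1) * r;
    t.y = (e1y * dv2 - e2y * dv1) * r;
    t.z = (e1z * dv2 - e2z * dv1) * r;
    return t;
}

// Per vertex, the tangents of adjacent triangles are averaged and then
// orthonormalized against the vertex normal (Gram-Schmidt):
//   T = normalize(T - N * dot(N, T))
```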

Note that the spherical surface on this gun is not dependent upon mesh geometry; rather, it is dependent upon the normal map (as seen above). The results look much more dramatic in the actual engine, where you can move through different light sources.


Categories: Development

Shadows - July 29, 2011

One of the most difficult things to implement in a game engine is shadows. A shadow itself isn’t difficult to implement; the difficult part is getting shadows to work with the game engine itself. Important questions are: how do you decide which direction a shadow gets cast? What happens as an entity moves through different light sources? How do you determine shadow visibility?

I have just implemented volumetric shadows in my game engine. This is a process of creating a shadow volume; everything inside that volume then gets darkened by the shadow. This technique requires that any mesh that is to cast a shadow must be a closed mesh, which basically means you could fill the mesh up with water and it wouldn’t spill out. Most of the meshes I developed appeared to be closed, but they had little leaks, so when I first tried to implement shadows I ran into all sorts of problems. I knew exactly why the problems appeared, but I didn’t want to deal with modifying the meshes, since I’m not an artist.

I ultimately decided to modify the entity definition file format with a new tag, <shadow>. This new tag allows a special mesh, the shadow mesh, to be specified; the entity will then cast the shadow of the shadow mesh instead of its own. Ideally the shadow mesh should resemble the actual mesh, because it is the mesh the shadow is computed from. In this way a separate, fully closed mesh can be specified for shadowing. This mesh can be simpler than the original, but it should have the same bone structure so that it can be animated properly.
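I won’t document the exact syntax here, but conceptually an entry looks something like this (the attribute and file names are invented for illustration):

```
<entity>
    ...
    <shadow mesh="helltrooper_shadow.x" />  <!-- closed, simplified copy of the visible mesh -->
</entity>
```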

With this new shadow mesh, I was able to cast shadows without problems. However, that’s where I ran into the more important question: what direction should a shadow be cast in? I figured that since I was using a four-closest-lights model for lighting a mesh, I could use the same data to decide which direction the shadow should be cast. The first thing I experimented with was using the single closest light to cast the shadow. This, of course, had the problem of the shadow suddenly jumping around when the closest light changed.

I realized I needed to do something akin to what I was doing with lighting and blend the closest lights together to create one faux light source, then cast the shadow from there. This was accomplished by averaging the light sources, each weighted by the same value used for its intensity in the lighting calculation. This seemed to work out okay and prevented the shadow from jumping around, except when the entity walked completely out of range of any light source; then the shadow would still be cast, but in a direction that wasn’t meaningful. This was just as bad as the shadow jumping around.

I finally decided that for the shadows to look more natural, I would need to change the darkness of the shadow depending on the overall intensity of the light. That way, if an entity was far away from a light source the shadow would be dim, and if it was out of range it wouldn’t cast a shadow at all. This required some reworking of how I was rendering shadows, as each shadow could now have its own intensity. Ultimately everything worked out, and it looks fairly natural.
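Putting the last two paragraphs together, the blending looks something like this (a simplified sketch, not the engine's actual code; the weights are the same per-light intensity values used in the lighting calculation):

```cpp
struct Vec3  { float x, y, z; };
struct Light { Vec3 position; };

// Blend the four closest lights into one faux shadow-casting light,
// and derive the shadow's darkness from the overall intensity.
void BlendShadowLight(const Light lights[4], const float weights[4],
                      Vec3* fauxPos, float* shadowIntensity)
{
    Vec3 p = {0.0f, 0.0f, 0.0f};
    float total = 0.0f;
    for (int i = 0; i < 4; ++i)
    {
        p.x += lights[i].position.x * weights[i];
        p.y += lights[i].position.y * weights[i];
        p.z += lights[i].position.z * weights[i];
        total += weights[i];
    }
    if (total > 0.0f)  // weighted average of the light positions
    {
        p.x /= total; p.y /= total; p.z /= total;
    }
    *fauxPos = p;
    // The shadow fades with overall intensity, disappearing entirely
    // once the entity is out of range of every light.
    *shadowIntensity = (total > 1.0f) ? 1.0f : total;
}
```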

Obviously, shadows don’t fade out like this in real life, but consider real life: if a single light source is blocked off, a sharp shadow is cast, but as you move further away from the light source the shadow does appear to get lighter, because there is more light bouncing around the various reflective surfaces in the environment. So ultimately the fading-shadow method isn’t too unbelievable.

Click on the screenshots below to enlarge them.





You might notice that the above shadows don’t really look like shadows of the Hell Trooper. That’s because they’re not; they’re shadows of this guy, the shadow mesh. In a production game you’d want a more accurate representation.


Categories: Development

Reflective Objects and Lights - July 25, 2011

Reflections

So my last post was about the fact that mirrors had been implemented into the game, and now I have actually implemented the ability to create a reflective object. In the entity definition file there is an mtree tag, which is used to define information about the entity’s mesh tree structure. I added an attribute to the mtree tag entitled flags, which can set various flags for rasterization:

- unlit ensures that lighting calculations are not performed when the entity is rendered.
- reflective signifies that the entity is reflective (more on this below).
- noshadow signifies that the entity does not cast a shadow (once shadows have been fully implemented).
- refractive is currently unused, but if I ever find the need to create an entity that refracts the scene (as opposed to reflecting it), this will signify that the entity does so.
- transparent signifies that the entity is transparent, so it should potentially be rendered last to ensure that other entities can be seen behind it.

The flags attribute can contain any combination of these flags (e.g. <mtree ... flags=unlit|noshadow>).
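Internally, flags like these are typically stored as a bitmask; a plausible sketch (names invented, not the engine's actual declarations):

```cpp
// One bit per rasterization flag from the mtree tag.
enum MTreeFlags
{
    MTREE_UNLIT       = 1 << 0,  // skip lighting calculations
    MTREE_REFLECTIVE  = 1 << 1,  // queue into the reflective-objects queue
    MTREE_NOSHADOW    = 1 << 2,  // never casts a shadow
    MTREE_REFRACTIVE  = 1 << 3,  // reserved, currently unused
    MTREE_TRANSPARENT = 1 << 4   // render late so geometry shows through
};

// flags=unlit|noshadow in the definition file would then parse to:
unsigned int flags = MTREE_UNLIT | MTREE_NOSHADOW;
```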

The most relevant to what I’ve implemented today, however, is the reflective flag. When an entity is flagged as reflective and is visible, it is queued into a special reflective-objects queue. The scene is then drawn as follows. First, visibility for all entities and map geometry is calculated. Next, any visible reflective entities are rendered as described in the previous post (the visibility information of the reflected scene is stored in a different data structure than the visibility information of the actual scene). Finally, the actual scene is rendered, minus the reflective entities (which were rendered in the previous step).
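As a skeleton (function names invented; the engine's real interfaces aren't shown), the frame looks like:

```cpp
struct Scene;
struct Camera;
struct Entity;

void    ComputeVisibility(Scene&, const Camera&);
int     ReflectiveCount(const Scene&);
Entity* ReflectiveEntity(Scene&, int i);
void    RenderReflection(Scene&, const Camera&, Entity&);
void    RenderSceneExcludingReflective(Scene&, const Camera&);

void RenderFrame(Scene& scene, const Camera& cam)
{
    // 1. Visibility for all entities and map geometry.
    ComputeVisibility(scene, cam);

    // 2. Reflections first; each uses its own visibility structure.
    for (int i = 0; i < ReflectiveCount(scene); ++i)
        RenderReflection(scene, cam, *ReflectiveEntity(scene, i));

    // 3. The actual scene, minus the already-drawn reflective entities.
    RenderSceneExcludingReflective(scene, cam);
}
```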

The results can be seen in this image:



Reflective objects work in the following way. They reflect about the object’s XY plane, so the mesh must be set up appropriately to ensure that the reflection looks correct. The object must have some kind of transparent texture where the mirror is, otherwise the reflection will be drawn over. Reflective objects cannot themselves be transparent (because of the order in which rendering occurs, they are drawn before anything else, so any objects behind them will be discarded by the z-buffer).
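Reflecting about the object’s XY plane boils down to negating z in object space before applying the world transform. Something like this with D3DX (a simplified sketch, not the engine's exact code; row-vector convention, v * M):

```cpp
#include <d3dx9.h>

D3DXMATRIX MakeReflectedWorld(const D3DXMATRIX& world)
{
    D3DXMATRIX reflect;
    D3DXMatrixScaling(&reflect, 1.0f, 1.0f, -1.0f); // mirror across z = 0
    return reflect * world;  // reflect in object space, then place in world
}

// Note: the reflection flips triangle winding, so the cull mode must be
// reversed (D3DCULL_CCW <-> D3DCULL_CW) while drawing the reflection.
```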

Along with that, the following limitations exist for reflective objects. Only a limited number of reflections may exist in one scene; currently the number is hard-coded to 2. This ensures that a large number of reflections, which are computationally expensive, are not present. Reflective objects do not reflect other reflective objects. Certainly it would be possible to do this, but again it would be computationally expensive. I could create a depth limit on how many reflections could be reflected, but in reality my current plans for the game do not include any need to do this. Also, I will most certainly implement some way for a reflective object to be reflected, only using a default texture for the mirror portion of the object.

Some buggy things exist that I will look into further. Currently, if you look at a reflective object through one portal the reflection seems to be okay, but if you look at it through two portals the reflection sometimes seems to ignore portals further down the line. This seems to be hit or miss, so I will have to test further. Also, in some cases objects behind the reflection are not being properly clipped. This occurred more in the development phase and seems to have gone away; it appeared to be a driver problem, as I tested extensively with clip planes and found that they did or did not work in certain circumstances. Again, this will need further testing.

Lighting

Now I want to move on to the other topic I’ve been working on, and that is lighting. A map may contain multiple lights, but obviously an entity isn’t going to be lit by every light on the map; that would be computationally expensive, and in reality an entity would be out of range of most lights in a map anyway. I had previously decided that each entity would be lit by the four closest lights to it, so before an entity is rendered the four closest lights are computed. I actually discovered a bug in this computation and repaired it, but beyond that, I basically had the entity being lit equally by each light, even if one light was further away than another.
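Selecting those lights looks something like this (a generic sketch; the engine's actual selection code isn't shown):

```cpp
#include <algorithm>
#include <vector>

struct Vec3  { float x, y, z; };
struct Light { Vec3 position; };

static float DistSq(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// After this call, lights[0..3] hold the nearest light sources.
void SortFourClosest(std::vector<Light>& lights, const Vec3& entityPos)
{
    size_t n = lights.size() < 4 ? lights.size() : 4;
    std::partial_sort(lights.begin(), lights.begin() + n, lights.end(),
        [&](const Light& a, const Light& b)
        { return DistSq(a.position, entityPos) < DistSq(b.position, entityPos); });
}
```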

I had experimented with different ways of getting the light to diminish over distance, but I finally settled on a quadratic falloff equation. Each light has a range, and as an entity gets further and further from the light, the intensity of that light decreases quadratically (or inverse quadratically, as it is actually a square root). This creates the effect that the light intensity is fairly constant near the light source, but right as it reaches the maximal range it quickly drops off to zero. The intensity is set to zero if the entity is out of range. This creates a much smoother transition as an entity moves around a map. In fact, when a map is properly lit, as an entity walks around it is barely noticeable that the set of lights being used to light it changes.
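In simplified form, a falloff curve with that behavior looks something like this (a sketch of the shape, not the exact engine formula: flat near the light, dropping sharply to zero at maximum range):

```cpp
#include <cmath>

float LightIntensity(float distance, float range)
{
    if (distance >= range)
        return 0.0f;                  // out of range: no contribution
    float t = distance / range;       // 0 at the light, 1 at max range
    return std::sqrt(1.0f - t * t);   // quarter-circle falloff
}
```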

The new lighting is especially dramatic in the Sorpigal level (the second map seen in the 2nd tech demo).


Categories: Development

Mirrors! - July 16, 2011

Wow, two posts in one day, but I couldn’t help myself. I was going to stop coding for the day to play some video games, but I started reading about stencil buffers and couldn’t help but experiment with creating a reflection effect. I’d experimented with reflections before by rendering to a cube map in real time, which basically created those ugly mirrors that you see in a lot of games (most recently I saw them in Duke Nukem Forever). Because of that, I already had code to render the scene from any angle, but I needed something that looked sharper than the cube texture. So I learned how to use the stencil buffer to create a reflection effect. Check it out.



Currently the single mirror seen in the above screenshot is hard-coded, so all that is left is to devise a method for signifying that a mirror is present in the game assets. Because of the way I’ve implemented the mirror code, it is certainly possible to create a mirror that can move around in real time, but I would also like the ability to have static geometry become a mirror.

Mirrors are done in a typical full-scene reflection method. A stencil buffer masks out where the mirror is to be rendered, the reflection is rendered within the stencil, the mirror surface is then rendered (in the case above the mirror has a simple red shade to distinguish it, but it could be any transparent, and even lit, texture), and finally the scene is rendered from the camera perspective. So technically it is four passes to render a reflection, but only two of those are computationally expensive. The effect is much sharper than the render-to-texture method that has been seen in so many games.
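In condensed form, the stencil work looks something like this (Direct3D 9 render states; a simplified sketch with placeholder draw calls, stencil assumed cleared to 0, not the exact engine code):

```cpp
#include <d3d9.h>

// Placeholders for the four draw stages.
void DrawMirrorPolygon();
void DrawReflectedScene();
void DrawMirrorSurfaceBlended();
void DrawScene();

void RenderMirrorFrame(IDirect3DDevice9* dev)
{
    // Pass 1: write 1s into the stencil where the mirror polygon is.
    dev->SetRenderState(D3DRS_STENCILENABLE, TRUE);
    dev->SetRenderState(D3DRS_STENCILFUNC,   D3DCMP_ALWAYS);
    dev->SetRenderState(D3DRS_STENCILREF,    1);
    dev->SetRenderState(D3DRS_STENCILPASS,   D3DSTENCILOP_REPLACE);
    DrawMirrorPolygon();

    // Pass 2: draw the reflected scene only where stencil == 1.
    dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);
    dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
    DrawReflectedScene();

    // Pass 3: blend the (tinted, possibly lit) mirror surface on top.
    dev->SetRenderState(D3DRS_STENCILENABLE, FALSE);
    DrawMirrorSurfaceBlended();

    // Pass 4: render the normal scene from the camera.
    DrawScene();
}
```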


Categories: Development

Feature Updates - July 16, 2011

With the engine coming along nicely, I have now been working on graphics updates. Vertex and pixel shaders are more strongly integrated into the game, and I have written another internal HLSL file, lightf.fx, that has some typical lighting-calculation functions as well as a vertex and pixel shader that implement them. I did this because more and more I see reasons to strictly use vertex shaders for all rendering. I’ve been working on doing vertex blending (or vertex skinning) within the vertex shader instead of in software. Vertex blending is the process of animating a mesh, and because of the way vertex shaders work, it seems like the best result will come from doing everything in a vertex shader. Previously a lot of effects could be accomplished using the standard Direct3D render states and fixed-function pipeline, but that is seeming more and more obsolete. I also added the material to the shader model. Most of the lighting code is based upon Lengyel’s Mathematics for 3D Game Programming book, but with the emissive and ambient components implemented similarly to the way Direct3D does it by default.
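For reference, software vertex blending boils down to something like this (a generic sketch, not the engine's code; the same weighted sum now runs per vertex on the GPU):

```cpp
struct Vec3   { float x, y, z; };
struct Mat4x3 { float m[4][3]; };  // rows 0-2 rotate/scale, row 3 translates

// Transform the vertex by each bone matrix and combine the results
// using the blend weights (which sum to 1).
Vec3 BlendVertex(Vec3 v, const Mat4x3 bones[], const float weights[], int n)
{
    Vec3 out = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < n; ++i)
    {
        const Mat4x3& b = bones[i];
        float x = v.x * b.m[0][0] + v.y * b.m[1][0] + v.z * b.m[2][0] + b.m[3][0];
        float y = v.x * b.m[0][1] + v.y * b.m[1][1] + v.z * b.m[2][1] + b.m[3][1];
        float z = v.x * b.m[0][2] + v.y * b.m[1][2] + v.z * b.m[2][2] + b.m[3][2];
        out.x += x * weights[i];
        out.y += y * weights[i];
        out.z += z * weights[i];
    }
    return out;
}
```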

I also fixed some major problems with the way the server communicated with the client, which should prevent some thread stalls that were occurring. This also solved a problem where, if the client ran too fast, most of the messages to the server would be skipped; the game can now be run without vsync enabled. Also, the way input is sent to the server has been changed, so that it is now concatenated until the server retrieves it. Speaking of which, what I really want to do is run a second physics simulation on the client that only computes the behavior of the client’s entity and then sends that behavior to the server, with the server running some kind of predictor-corrector method. This would eliminate the need to send input to the server at all, as it would only be processed on the client side, and it should allow a faster response between the client’s controls and the movement of the entity it is controlling. Admittedly, the lag between controls and response is hardly noticeable when the server and the client are on the same machine, but when I actually begin to implement networking this will probably be an important thing to do.

In other news, I finally got around to clearing up all the memory leaks that were occurring in the game. I say finally because they had been bugging me for a while, and though the leaks were small, I felt they had the potential to blow up. Most of the leaks were caused by declaring static objects that allocated memory in their constructors; a few were caused by failing to deallocate memory in some other capacity.

Overall I’m very happy with how the game is coming along. I’ve also begun to develop a map for an actual game. I won’t announce what the game is, simply because it is a really simple idea, and it is a really good idea, and I don’t want anyone to steal it. While ideally I’d like to work on a Deus Ex clone, an adventure game, or something else with a lot of story, I just don’t have the resources to do so, so I’m going to focus on developing a more casual type of game, something that I can probably develop on my own. The biggest problem there is that I have to really learn how to do some decent art, and not all this programmer art I have been doing.

Most of the work I plan on doing on the engine will be in the graphics department. I want to implement dot3 bump mapping, perfect the shadows, and add real-time reflections. Other than that, there is still a bug where the game will crash if a client tries to connect to a server where no map or entity definition file has been loaded. I know exactly what is causing this, but it involves work on both the server and the client, so I haven’t dealt with it yet.


Categories: Development



This blog chronicles the development of the Emergence Game engine. The Emergence Game Engine will be a fully functional 3D game engine with support for the latest technologies in video game development. This blog features the remarks of the lead programmer, Blaine Myers, as he comments on the struggles and joys of developing a 3D game engine.
