Irrlicht 3d Mesh Formats For Essays

A tutorial by geoff.

In this tutorial we'll learn how to create custom meshes and work with them in Irrlicht. We'll build an interesting heightmap terrain with some lighting effects. With keys 1, 2, and 3 you can choose a different mesh layout, which is written into the mesh buffers as required; all positions, normals, and so on are updated accordingly.

Ok, let's start with the headers (there's not much to say about them):

#include <irrlicht.h>
#include "driverChoice.h"

#ifdef _MSC_VER
#pragma comment(lib, "Irrlicht.lib")
#endif

using namespace irr;
using namespace video;
using namespace core;
using namespace scene;
using namespace io;
using namespace gui;

This is the type of the functions which work out the colour.

typedef SColor colour_func(f32 x, f32 y, f32 z);

Here comes a set of functions which can be used for colouring the vertices while creating the mesh.

SColor grey(f32, f32, f32 z)
{
    u32 n = (u32)(255.f * z);
    return SColor(255, n, n, n);
}

SColor yellow(f32 x, f32 y, f32)
{
    return SColor(255, 128 + (u32)(127.f * x), 128 + (u32)(127.f * y), 255);
}

SColor white(f32, f32, f32)
{
    return SColor(255, 255, 255, 255);
}

The type of the functions which generate the heightmap. x and y are coordinates measured from the centre of the map, and s is the scale of the heightmap (here, the length of its diagonal).

typedef f32 generate_func(s16 x, s16 y, f32 s);

f32 eggbox(s16 x, s16 y, f32 s)
{
    const f32 r = 4.f * sqrtf((f32)(x*x + y*y)) / s;
    const f32 z = expf(-r * 2) * (cosf(0.2f * x) + cosf(0.2f * y));
    return 0.25f + 0.25f * z;
}

f32 moresine(s16 x, s16 y, f32 s)
{
    const f32 xx = 0.3f * (f32)x / s;
    const f32 yy = 12 * y / s;
    const f32 z = sinf(xx*xx + yy) * sinf(xx + yy*yy);
    return 0.25f + 0.25f * z;
}

f32 justexp(s16 x, s16 y, f32 s)
{
    const f32 xx = 6 * x / s;
    const f32 yy = 6 * y / s;
    const f32 z = (xx*xx + yy*yy);
    return 0.3f * z * cosf(xx*yy);
}

A simple class for representing heightmaps. Most of this should be obvious.

class HeightMap
{
private:
    const u16 Width;
    const u16 Height;
    f32 s;
    core::array<f32> data;

public:
    HeightMap(u16 _w, u16 _h) : Width(_w), Height(_h), s(0.f), data(0)
    {
        s = sqrtf((f32)(Width * Width + Height * Height));
        data.set_used(Width * Height);
    }

    // Fill the heightmap using the generator function f.
    void generate(generate_func f)
    {
        u32 i = 0;
        for(u16 y = 0; y < Height; ++y)
            for(u16 x = 0; x < Width; ++x)
                set(i++, calc(f, x, y));
    }

    u16 height() const { return Height; }
    u16 width() const { return Width; }

    f32 calc(generate_func f, u16 x, u16 y) const
    {
        const f32 xx = (f32)x - Width*0.5f;
        const f32 yy = (f32)y - Height*0.5f;
        return f((u16)xx, (u16)yy, s);
    }

    // Heights are stored row by row: the value at (x, y) lives at data[y * Width + x].
    void set(u16 x, u16 y, f32 z) { data[y * Width + x] = z; }
    void set(u32 i, f32 z) { data[i] = z; }
    f32 get(u16 x, u16 y) const { return data[y * Width + x]; }

The only difficult part. This considers the normal at (x, y) to be the cross product of the vectors between the adjacent points in the horizontal and vertical directions.

s is a scaling factor, which is necessary if the height units are different from the coordinate units; for example, if your map has heights in metres and the coordinates are in units of a kilometer.

    vector3df getnormal(u16 x, u16 y, f32 s) const
    {
        const f32 zc = get(x, y);
        f32 zl, zr, zu, zd;

        if (x == 0)
        {
            zr = get(x + 1, y);
            zl = zc + zc - zr;
        }
        else if (x == Width - 1)
        {
            zl = get(x - 1, y);
            zr = zc + zc - zl;
        }
        else
        {
            zr = get(x + 1, y);
            zl = get(x - 1, y);
        }

        if (y == 0)
        {
            zd = get(x, y + 1);
            zu = zc + zc - zd;
        }
        else if (y == Height - 1)
        {
            zu = get(x, y - 1);
            zd = zc + zc - zu;
        }
        else
        {
            zd = get(x, y + 1);
            zu = get(x, y - 1);
        }

        return vector3df(s * 2 * (zl - zr), 4, s * 2 * (zd - zu)).normalize();
    }
};

A class which generates a mesh from a heightmap.

class TMesh
{
private:
    u16 Width;
    u16 Height;
    f32 Scale;

public:
    SMesh* Mesh;

    TMesh() : Mesh(0), Width(0), Height(0), Scale(1.f)
    {
        Mesh = new SMesh();
    }

    ~TMesh()
    {
        Mesh->drop();
    }

    // Unless the heightmap is small, it won't fit into a single SMeshBuffer,
    // so we chop it into strips of rows and build one buffer per strip.
    void init(const HeightMap &hm, f32 scale, colour_func cf, IVideoDriver *driver)
    {
        Scale = scale;

        const u32 mp = driver->getMaximalPrimitiveCount();
        Width = hm.width();
        Height = hm.height();

        // rows per strip, chosen so each buffer stays under the primitive limit
        const u32 sw = mp / (6 * Height);

        u32 i = 0;
        for(u32 y0 = 0; y0 < Height; y0 += sw)
        {
            u16 y1 = y0 + sw;
            if (y1 >= Height)
                y1 = Height - 1;   // the last strip may be narrower
            addstrip(hm, cf, y0, y1, i);
            ++i;
        }
        if (i < Mesh->getMeshBufferCount())
        {
            // Fewer buffers are needed than before, so drop the extra ones.
            for (u32 j = i; j < Mesh->getMeshBufferCount(); ++j)
            {
                Mesh->getMeshBuffer(j)->drop();
            }
            Mesh->MeshBuffers.erase(i, Mesh->getMeshBufferCount() - i);
        }

        Mesh->setDirty();
        Mesh->recalculateBoundingBox();
    }

    // Generate a SMeshBuffer for the rows between y0 and y1 of the heightmap
    // and add it to the mesh.
    void addstrip(const HeightMap &hm, colour_func cf, u16 y0, u16 y1, u32 bufNum)
    {
        SMeshBuffer *buf = 0;
        if (bufNum < Mesh->getMeshBufferCount())
        {
            buf = (SMeshBuffer*)Mesh->getMeshBuffer(bufNum);
        }
        else
        {
            // Create a new buffer; the mesh grabs it, so we can drop our
            // reference but keep using the pointer.
            buf = new SMeshBuffer();
            Mesh->addMeshBuffer(buf);
            buf->drop();
        }
        buf->Vertices.set_used((1 + y1 - y0) * Width);

        u32 i = 0;
        for (u16 y = y0; y <= y1; ++y)
        {
            for (u16 x = 0; x < Width; ++x)
            {
                const f32 z = hm.get(x, y);
                const f32 xx = (f32)x / (f32)Width;
                const f32 yy = (f32)y / (f32)Height;

                S3DVertex& v = buf->Vertices[i++];
                v.Pos.set(x, Scale * z, y);
                v.Normal.set(hm.getnormal(x, y, Scale));
                v.Color = cf(xx, yy, z);
                v.TCoords.set(xx, yy);
            }
        }

        // Two triangles (six indices) per heightmap cell.
        buf->Indices.set_used(6 * (Width - 1) * (y1 - y0));
        i = 0;
        for(u16 y = y0; y < y1; ++y)
        {
            for(u16 x = 0; x < Width - 1; ++x)
            {
                const u16 n = (y - y0) * Width + x;
                buf->Indices[i] = n;
                buf->Indices[++i] = n + Width;
                buf->Indices[++i] = n + Width + 1;
                buf->Indices[++i] = n + Width + 1;
                buf->Indices[++i] = n + 1;
                buf->Indices[++i] = n;
                ++i;
            }
        }

        buf->recalculateBoundingBox();
    }
};

Our event receiver implementation, taken from tutorial 4.

class MyEventReceiver : public IEventReceiver
{
public:
    virtual bool OnEvent(const SEvent& event)
    {
        if (event.EventType == irr::EET_KEY_INPUT_EVENT)
            KeyIsDown[event.KeyInput.Key] = event.KeyInput.PressedDown;

        return false;
    }

    virtual bool IsKeyDown(EKEY_CODE keyCode) const
    {
        return KeyIsDown[keyCode];
    }

    MyEventReceiver()
    {
        for (u32 i = 0; i < KEY_KEY_CODES_COUNT; ++i)
            KeyIsDown[i] = false;
    }

private:
    bool KeyIsDown[KEY_KEY_CODES_COUNT];
};

Much of this is code taken from some of the examples. We merely set up a mesh from a heightmap, light it with a moving light, and allow the user to navigate around it.

int main(int argc, char* argv[])
{
    video::E_DRIVER_TYPE driverType = driverChoiceConsole();
    if (driverType == video::EDT_COUNT)
        return 1;

    MyEventReceiver receiver;
    IrrlichtDevice* device = createDevice(driverType,
        core::dimension2du(800, 600), 32, false, false, false, &receiver);

    if (device == 0)
        return 1;

    IVideoDriver *driver = device->getVideoDriver();
    ISceneManager *smgr = device->getSceneManager();
    device->setWindowCaption(L"Irrlicht Example for SMesh usage.");

Create the custom mesh and initialize it with a heightmap:

    TMesh mesh;
    HeightMap hm = HeightMap(255, 255);
    hm.generate(eggbox);
    mesh.init(hm, 50.f, grey, driver);

    IMeshSceneNode* meshnode = smgr->addMeshSceneNode(mesh.Mesh);
    meshnode->setMaterialFlag(video::EMF_BACK_FACE_CULLING, false);

    ILightSceneNode *node = smgr->addLightSceneNode(0, vector3df(0, 100, 0),
        SColorf(1.0f, 0.6f, 0.7f, 1.0f), 500.0f);
    if (node)
    {
        node->getLightData().Attenuation.set(0.f, 1.f/500.f, 0.f);

        ISceneNodeAnimator* anim = smgr->createFlyCircleAnimator(vector3df(0, 150, 0), 250.0f);
        if (anim)
        {
            node->addAnimator(anim);
            anim->drop();
        }
    }

    ICameraSceneNode* camera = smgr->addCameraSceneNodeFPS();
    if (camera)
    {
        camera->setPosition(vector3df(-20.f, 150.f, -20.f));
        camera->setTarget(vector3df(200.f, -80.f, 150.f));
        camera->setFarValue(20000.0f);
    }

Just the usual render loop with event handling. The custom mesh is an ordinary part of the scene graph and gets rendered by drawAll().

    while(device->run())
    {
        if(!device->isWindowActive())
        {
            device->sleep(100);
            continue;
        }

        if(receiver.IsKeyDown(irr::KEY_KEY_W))
        {
            meshnode->setMaterialFlag(video::EMF_WIREFRAME,
                !meshnode->getMaterial(0).Wireframe);
        }
        else if(receiver.IsKeyDown(irr::KEY_KEY_1))
        {
            hm.generate(eggbox);
            mesh.init(hm, 50.f, grey, driver);
        }
        else if(receiver.IsKeyDown(irr::KEY_KEY_2))
        {
            hm.generate(moresine);
            mesh.init(hm, 50.f, yellow, driver);
        }
        else if(receiver.IsKeyDown(irr::KEY_KEY_3))
        {
            hm.generate(justexp);
            mesh.init(hm, 50.f, yellow, driver);
        }

        driver->beginScene(true, true, SColor(0xff000000));
        smgr->drawAll();
        driver->endScene();
    }

    device->drop();

    return 0;
}

That's it! Just compile and play around with the program.

The Irrlicht Engine is a cross-platform, high-performance realtime 3D engine written in C++. It features a powerful high-level API for creating complete 3D and 2D applications such as games or scientific visualizations. It comes with excellent documentation and integrates all state-of-the-art features for visual representation, such as dynamic shadows, particle systems, character animation, indoor and outdoor technology, and collision detection. All this is accessible through a well designed C++ interface, which is extremely easy to use.

Its main features are:
  • High performance realtime 3D rendering using Direct3D and OpenGL.
  • Platform independent. Runs on Windows, Linux, OSX, Solaris, and others.
  • Huge built-in and extensible material library with vertex, pixel, and geometry shader support.
  • Seamless indoor and outdoor mixing through highly customizable scene management.
  • Character animation system with skeletal and morph target animation.
  • Particle effects, billboards, light maps, environment mapping, stencil buffer shadows, and lots of other special effects.
  • Several language bindings which make the engine available to other languages such as C#, VisualBasic, Delphi, Java …
  • Two platform and driver independent fast software renderers included. They have different properties (speed vs. quality) and feature everything needed: perspective correct texture mapping, bilinear filtering, sub-pixel correctness, z-buffer, Gouraud shading, alpha-blending and transparency, fast 2D drawing, and more.
  • Powerful, customizable, and easy to use 2D GUI system with buttons, lists, edit boxes, …
  • 2D drawing functions like alpha blending, color key based blitting, font drawing, and mixing 3D with 2D graphics.
  • Clean, easy to understand, and well documented API with lots of examples and tutorials.
  • Written in pure C++ and totally object oriented.
  • Direct import of common mesh file formats: Alias Wavefront (.obj), 3D Studio (.3ds), COLLADA (.dae), Blitz3D (.b3d), Milkshape (.ms3d), Quake 3 levels (.bsp), Quake 2 models (.md2), Microsoft DirectX (.X)…
  • Direct import of textures: Windows Bitmap (.bmp), Portable Network Graphics (.png), Adobe Photoshop (.psd), JPEG File Interchange Format (.jpg), Truevision Targa (.tga), ZSoft Paintbrush (.pcx)…
  • Fast and easy collision detection and response.
  • Optimized fast 3D math and container template libraries.
  • Directly reading from (compressed) archives. (.zip, .pak, .pk3, .npk)
  • Integrated fast XML parser.
  • Unicode support for easy localisation.
  • Works with Microsoft VisualStudio, Metrowerks Codewarrior, Bloodshed Dev-C++, Code::Blocks, XCode, and gcc 3.x-4.x.
  • The engine is open source and totally free. You can debug it, fix bugs and even change things you do not like. And you do not have to publish your changes: The engine is licensed under the zlib licence, not the GPL or the LGPL.

Special effects

There are lots of common special effects available in the Irrlicht Engine. They are not difficult to use; in most cases the programmer only has to switch them on. The engine is constantly extended with new effects. Here is a list of the effects which are currently implemented (a small fog example follows the list):
  • Animated water surfaces
  • Dynamic lights
  • Dynamic shadows using the stencil buffer
  • Geo mip-mapped terrain
  • Billboards
  • Bump mapping
  • Parallax mapping
  • Transparent objects
  • Light maps
  • Customizable particle systems for snow, smoke, fire, …
  • Sphere mapping
  • Texture animation
  • Skyboxes and skydomes
  • Fog
  • Volume Light
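
Most of these effects take only a call or two to switch on. As a small hedged example, the sketch below enables linear fog globally and opts the tutorial's terrain node into it; the fog colour and distance values are arbitrary assumptions, not values from the tutorial.

// A minimal sketch with arbitrary values, using the driver and meshnode
// pointers from the tutorial above.
driver->setFog(video::SColor(0, 138, 125, 81),   // fog colour
    video::EFT_FOG_LINEAR,                       // linear falloff
    250.0f, 1000.0f,                             // start and end distances
    0.003f,                                      // density (used by exponential fog)
    true, false);                                // per-pixel fog, no range fog
meshnode->setMaterialFlag(video::EMF_FOG_ENABLE, true);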

Drivers

The Irrlicht Engine supports five rendering APIs, which is four more than most other 3D engines do:
  • Direct3D 9.0
  • OpenGL 1.2-4.x
  • The Irrlicht Engine software renderer.
  • The Burning's Video software renderer
  • A null device.

When using the Irrlicht Engine, the programmer does not need to know which API the engine is using; it is totally abstracted away. He only needs to tell the engine which API it should prefer (a minimal sketch of requesting a specific driver follows the list below).
There are three reasons why the engine does not focus on a single API:

  • Performance. Some graphics adapters are optimized for OpenGL, while others simply run faster with Direct3D.
  • Platform independence. Direct3D will not be present on a Mac or a Linux workstation, while OpenGL may be. And when OpenGL is not available either, there are still the Irrlicht Engine software renderers, which will work on any platform. In this way, the user is sure to see something on the screen.
  • Driver problems are a common issue when using 3D software. There are thousands of hardware configurations out there, and games and 3D applications often crash because an old driver is installed. Letting the user switch drivers might solve the problem.
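
To make the first point concrete, here is a minimal sketch of requesting one specific driver; only the enum value would change for another API, and the rest of the application stays the same. The window size and the other flags are just example values.

// A minimal sketch: ask for OpenGL explicitly instead of prompting the user
// as the tutorial's main() does with driverChoiceConsole().
IrrlichtDevice* device = createDevice(video::EDT_OPENGL,
    core::dimension2du(640, 480), 32, false, false, false, 0);
if (device == 0)
    return 1;   // the requested driver is not available on this system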

Materials and Shaders

To be able to create realistic environments quickly, there are lots of common materials built into the engine. Some materials are based on the fixed function pipeline (light mapped geometry, for example) and some rely on the programmable pipeline offered by today's 3D hardware (normal mapped or parallax per-pixel lighted materials, for example). These material types can be mixed in a scene without problems, and when a material needs features the hardware cannot provide, the engine offers fallback materials. If the built-in materials are not enough, it is possible to add new materials to Irrlicht at runtime without modifying or recompiling the engine; a rough sketch of registering such a material follows the list below. Currently supported shader languages for this are:
  • Pixel and Vertex Shaders 1.1 to 3.0
  • ARB Fragment and Vertex Programs
  • HLSL
  • GLSL
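
As mentioned above, new materials can be registered at runtime through the video driver's GPU programming services. The following is only a hedged sketch: the shader file names are placeholders, no constant-set callback is supplied, and the resulting material is simply applied to the tutorial's terrain node.

// A rough sketch, not part of the tutorial above. "myVertex.glsl" and
// "myPixel.glsl" are placeholder file names.
video::IGPUProgrammingServices* gpu = driver->getGPUProgrammingServices();
if (gpu)
{
    s32 newMaterial = gpu->addHighLevelShaderMaterialFromFiles(
        "myVertex.glsl", "main", video::EVST_VS_1_1,
        "myPixel.glsl",  "main", video::EPST_PS_1_1,
        0,                   // no shader constant callback in this sketch
        video::EMT_SOLID);   // base the new material on the solid material
    meshnode->setMaterialType((video::E_MATERIAL_TYPE)newMaterial);
}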

Platforms

The Irrlicht Engine is platform independent, currently there is official support for:
  • Windows XP, XP64, Vista, CE, 7, 8, and 10
  • Linux
  • OSX
  • Sun Solaris/SPARC
  • All platforms using SDL

For the serious mobile developer there are work-in-progress OpenGL ES drivers, which have enabled the Irrlicht community to develop iPhone, Android, and Nokia Symbian ports. The engine works in the same way on all supported platforms: the programmer only has to write the game or application code once, and it will run on every supported platform without changing a single line of code.

Scene Management

Rendering in the Irrlicht Engine is done using a hierarchical scene graph. Scene nodes are attached to each other and follow each other's movements, cull their children to the viewing frustum, and are able to do collision detection. A scene node can, for example, be a camera, an indoor or outdoor level, a skeletal animated character, animated water, a geomipmap terrain, or something completely different. In this way, the Irrlicht Engine can seamlessly mix indoor and outdoor scenes together and gives the programmer full control over everything going on in the scene. It is easily extensible because the programmer can add his own scene nodes, meshes, texture loaders, GUI elements, and so on. The geometry creator gives easy access to simple geometrical bodies, such as cylinders, cubes, etc. Objects can be rendered as polygons, wireframe, or points, using triangle, line, point, and point-sprite primitives.
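
As a small hedged illustration of those simple bodies and of the parent/child behaviour described above, the sketch below adds a cube scene node and attaches it to the tutorial's terrain node; the size and position values are arbitrary.

// A minimal sketch using the smgr and meshnode pointers from the tutorial.
scene::ISceneNode* cube = smgr->addCubeSceneNode(10.0f);    // 10.0f is an arbitrary edge length
if (cube)
{
    cube->setPosition(core::vector3df(0.f, 60.f, 0.f));     // arbitrary offset above the terrain
    meshnode->addChild(cube);   // the cube now follows the terrain node's movements
}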

Character Animation

Currently there are two types of character animation implemented:
  • Morph target animation: Meshes are linearly interpolated from one frame to the next. This is what is done in the Quake game series, and how the Irrlicht engine does it when importing .md2 and .md3 files.
  • Skeletal animation: A skin is manipulated by animated joints. The Irrlicht Engine does this when loading .ms3d, .x, and .b3d files. It is easy to attach objects to parts of the animated model; for example, a weapon can be attached to the hand of a model with only one line of code, and it will move as the hand moves.

The programmer doesn't need to know about all this if he doesn't want to: all he has to do is load the files into the engine and let it animate and draw them.
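
In practice that boils down to a couple of calls. The sketch below is only an illustration with placeholder file names: the engine picks a loader based on the file extension and caches the mesh, so later getMesh() calls for the same file reuse the loaded data.

// A hedged sketch, assuming an MD2 model and a texture exist at these
// placeholder paths; smgr and driver are the pointers from the tutorial.
scene::IAnimatedMesh* md2 = smgr->getMesh("media/somemodel.md2");
if (md2)
{
    scene::IAnimatedMeshSceneNode* model = smgr->addAnimatedMeshSceneNode(md2);
    if (model)
    {
        model->setMaterialTexture(0, driver->getTexture("media/somemodel.pcx"));
        model->setMD2Animation(scene::EMAT_RUN);   // play one of the standard MD2 animations
    }
}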

Supported Formats

Lots of common file formats are supported and can be loaded (and partially also saved) directly from within the engine. In this way, you don't need to convert your media before using it with the Irrlicht Engine, saving development time. The internal resource management of Irrlicht provides simple access to all file formats and takes care of fetching already loaded meshes or textures from its own cache instead of from disk. The list of supported formats is constantly growing, and there are many community-made plugins which add new formats without having to recompile the engine. If you need the Irrlicht Engine to load a file format which it cannot currently handle, simply ask for it by raising a feature request on our issue tracker. Currently supported texture file formats:
  • JPEG File Interchange Format (.jpg, r/w)
  • Portable Network Graphics (.png, r/w)
  • Truevision Targa (.tga, r/w)
  • Windows Bitmap (.bmp, r/w)
  • Zsoft Paintbrush (.pcx, r/w)
  • Portable Pixmaps (.ppm, r/w)
  • Adobe Photoshop (.psd, r)
  • Quake 2 textures (.wal, r)
  • SGI truecolor textures (.rgb, r)

Currently supported mesh file formats:

Animated objects:

  • B3D files (.b3d, r, skeleton)
  • Microsoft DirectX (.x, r) (binary & text, skeleton)
  • Milkshape (.ms3d, r, skeleton)
  • Quake 3 models (.md3, r, morph)
  • Quake 2 models (.md2, r, morph)

Static objects:

  • Irrlicht scenes (.irr, r/w)
  • Irrlicht static meshes (.irrmesh, r/w)
  • 3D Studio meshes (.3ds, r)
  • Alias Wavefront (.obj, r/w)
  • Lightwave Objects (.lwo, r)
  • COLLADA 1.4 (.xml, .dae, r/w)
  • OGRE meshes (.mesh, r)
  • My3DTools 3 (.my3D, r)
  • Pulsar LMTools (.lmts, r)
  • Quake 3 levels (.bsp, r)
  • DeleD (.dmf, r)
  • FSRad oct (.oct, r)
  • Cartography shop 4 (.csm, r)
  • STL 3D files (.stl, r/w)
  • PLY 3D files (.ply, r/w)

Supported Render Features

Irrlicht supports all general render features needed for high quality rendering of materials and effects. With a few exceptions, all features are supported in all hardware accelerated APIs, and some are supported by the software renderers as well. The following list contains all supported render features of the current version of the Irrlicht Engine; if you need this set extended with a particular feature, please ask for it. A short sketch of configuring a few of these material properties follows the list below.

Currently supported render features:

  • Predefined materials
    • Solid
    • Solid with alpha blending 2nd texture
    • Light maps with configurable pre-multiplication and additional dynamic light support
    • Detail map
    • Sphere map
    • Environment reflection
    • Transparency by adding the texture
    • Transparency by using the texture alpha
    • Transparency by using the texture alpha without blending
    • Transparency by using the vertex alpha
    • Normal maps
    • Parallax maps
    • Flexible blend mode rendering
  • Material Colors (ambient, diffuse, emissive, specular)
  • Shininess
  • Line thickness (only OpenGL)
  • ZBuffer write/test modes
  • Per mesh anti-aliasing settings
  • Alpha to Coverage
  • Color masking
  • Vertex colors with configurable interpretation
  • Wireframe/Point cloud rendering
  • Gouraud/Flat shading
  • Lighting mode configurable
  • Backface/Frontface culling
  • Fog enabling per mesh
  • Automatic normals normalization
  • Texture coordinates repeat/clamp modes
  • Per texture filtering (bilinear, trilinear, anisotropic)
  • Texture LOD Bias
  • Texture matrices
  • Arbitrary number of multi-textures
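
Most of these features are plain per-material settings. As a short hedged sketch with arbitrary values, applied to the terrain node from the tutorial above:

// A small sketch: several of the features listed above are simple
// SMaterial fields or material flags on a scene node.
video::SMaterial& mat = meshnode->getMaterial(0);
mat.Shininess = 20.0f;                          // specular highlight power
mat.SpecularColor.set(255, 255, 255, 255);      // material colours (white specular here)
mat.GouraudShading = true;                      // Gouraud rather than flat shading
mat.TextureLayer[0].TrilinearFilter = true;     // per-texture filtering
meshnode->setMaterialFlag(video::EMF_NORMALIZE_NORMALS, true);   // automatic normals normalization
meshnode->setMaterialFlag(video::EMF_LIGHTING, true);            // configurable lighting mode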
