Zorn

Technical design

This document is supposed to give an overview of the Zorn game from a developer's point of view. I've been working on this on and off for some time; time to put it on paper, so to speak.

This document was last changed (in any meaningful way) 2005-09-01.

Overview

Zorn is a single or networked multiplayer, realtime, 2D graphical action game in Python. It uses OpenGL for graphics, the SDL for input, window management and (so far) audio. Everything else (networking) is done with standard Python libraries.

Engines and bridges

Most of the work is performed by engines. Engines are essentially stand-alone modules, each responsible for one task or one area of functionality. Engines are also the parts of the project that could most easily be extracted and used by other projects.

The communication between engines is performed by bridges. Bridges are basically non-standalone modules. They need to know the design of the engines they communicate between, and are therefore not much use without them. On the other hand, the engines themselves do not need to know what the other engines do inside, or even that other engines exist.

The following engines are planned:

  - the network engine
  - the input engine
  - the graphics engine (with the font engine as a subengine)
  - the sound engine
  - the physics engine
  - the game engine

Each of these engines is described in more detail below.

Multithreading

As far as possible, I would like to avoid multithreading or multiprocessing. Each engine should be actively "pumped" via an explicit method call; when this method returns, the engine does no further work until the next pump. This may not be the most efficient setup, but it should do wonders for code readability and maintainability.
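
The pump pattern can be sketched as follows. This is a minimal illustration of the single-threaded design; the class and method names are hypothetical, not the actual Zorn API.

```python
# Sketch of the single-threaded "pump" pattern: each engine does a
# bounded amount of work per call, then returns control to the caller.

class Engine:
    """Illustrative base class for a pumped engine."""
    def __init__(self, name):
        self.name = name
        self.pumps = 0

    def pump(self):
        # Do one slice of work, then stop until the next pump.
        self.pumps += 1

def main_loop(engines, steps):
    # The main loop simply pumps every engine in turn; no threads needed.
    for _ in range(steps):
        for engine in engines:
            engine.pump()

engines = [Engine("input"), Engine("physics"), Engine("graphics")]
main_loop(engines, steps=3)
```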

There may be one exception: a player playing and hosting a multiplayer game might benefit from the server part running in a separate thread (if not a separate process). The most significant consequence is that the network engine would have to be thread-safe for this to work.

The network engine

The network engine's job is to pass messages from the client to the server. For now, see the network design text.

The input engine

The input engine's main purpose is to provide a transparent, flexible and configurable mapping from inputs, such as keyboard, mouse and joystick inputs, to controls, such as throttle, steering, and fire buttons.

The input engine is currently under heavy design. Currently, I've settled on four control types:

  - unsigned discrete
  - unsigned continuous
  - signed discrete
  - signed continuous

Unsigned controls go from 0 to 1, signed ones go from -1 to 1. Discrete controls can only take the values 0, 1 or -1, while continuous ones can take anything in between. Note: it should be possible to map every mentioned type of control to a signed continuous one, so the above partition is a convenience rather than a necessity. "Illegal" values should be clamped or rounded if ambiguities arise.
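
The clamping and rounding rules could look like this. A minimal sketch; the function name and defaults are assumptions, not part of the design.

```python
# Illustrative normalization for the four control types: clamp to the
# legal range, then round if the control is discrete.

def normalize(value, signed=True, discrete=False):
    lo = -1.0 if signed else 0.0
    value = max(lo, min(1.0, value))   # clamp "illegal" values
    if discrete:
        value = float(round(value))    # snap to -1, 0 or 1
    return value

normalize(1.7)                    # clamped to 1.0
normalize(-0.4, signed=False)     # clamped to 0.0
normalize(0.6, discrete=True)     # rounded to 1.0
```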

On the client side, the input engine should aggregate the individual input events and maintain a control state. Any changes in this control state are transmitted in one pass (as far as bandwidth constraints permit) to the server, which maintains a control state for all clients. The game engine (on the server side) interprets this control state each (server) step. The client flags a changed control as "dirty" and clears this flag upon sending. (Improvement: keep the last transmitted state in memory, and send only if the state differs.)
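
The dirty-flag scheme, including the "last transmitted state" improvement, can be sketched as follows. Class and control names are illustrative only.

```python
# Sketch of the client-side control state: only controls whose value
# differs from the last transmitted value are collected for sending.

class ControlState:
    def __init__(self):
        self.values = {}      # control name -> current value
        self.sent = {}        # control name -> last transmitted value

    def set(self, name, value):
        self.values[name] = value

    def collect_changes(self):
        # Gather only "dirty" controls (changed since last send),
        # then mark them as sent.
        changes = {n: v for n, v in self.values.items()
                   if self.sent.get(n) != v}
        self.sent.update(changes)
        return changes

state = ControlState()
state.set("throttle", 1.0)
state.set("steering", 0.0)
first = state.collect_changes()    # both controls are new: both sent
state.set("throttle", 1.0)         # same value: not dirty
second = state.collect_changes()   # nothing to send this pass
```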

However, this will break controls that rely on a "triggering" behaviour, where the time, count and order of control changes are relevant. For example, consider a weapon that charges while the weapon control (an unsigned discrete control) is held (1) and fires when the weapon control is released (0). If the player releases and immediately re-presses this control between two input engine steps, the server will see no net change (as per the architecture above), and the weapon will keep charging without firing. The player's intent, however, was to fire the weapon with only minimal charging.

To accommodate this case, the client can mark controls as triggered. A triggered control has no "dirty" flag or "last sent" value. Instead, all triggered controls add their control change events to a one-per-client event list, the triggered list. This list is sent to the server as a whole, before the non-triggered controls are handled as above. This preserves at least the count and order of control events. Timing information is not kept, unless the control events carry associated time information (currently not planned).
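
The triggered list can be sketched like this: triggered controls append every change, in order, while non-triggered controls just overwrite their state. All names here are hypothetical.

```python
# Sketch of the per-client triggered list: count and order of changes
# to triggered controls are preserved until the list is flushed.

class TriggeredControls:
    def __init__(self, triggered):
        self.triggered = set(triggered)
        self.events = []          # the per-client "triggered list"
        self.state = {}           # plain state for non-triggered controls

    def change(self, name, value):
        if name in self.triggered:
            self.events.append((name, value))   # keep every event
        else:
            self.state[name] = value            # just track the state

    def flush(self):
        # Sent to the server as a whole, before non-triggered controls.
        events, self.events = self.events, []
        return events

controls = TriggeredControls(triggered=["fire"])
controls.change("fire", 1)   # press
controls.change("fire", 0)   # release within the same step: preserved
controls.change("fire", 1)   # press again
packet = controls.flush()    # three events, in order
```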

Triggering has no effect on the input-to-network protocol. In either case (triggered or non-triggered), a control change message is sent from the client to the server. In the case of triggered inputs, this may cause more than one state change per control to be sent. The server receives these in the same fashion; whether it treats them differently (merely updating the control state or actually acting on the event) is up to the server. Setting a control non-triggered is useful for cutting down on the network traffic used. When in doubt, set all controls triggered. Currently, it is planned to set all discrete controls triggered and all continuous controls untriggered by default, though this may change!

The engine differentiates between three modes of operation: the game mode, the text input mode, and the menu navigation mode. The keyboard, in particular, assigns different semantics to keys in each of the three modes. The way to switch between modes is, again, defined in the input engine. However, the engine will not switch between modes automatically.

Since the input engine also receives events that go beyond control state changes (the most important being the SDL quit message), the engine cannot take care of all events in a single loop. Therefore, it accumulates as many events as it can handle. When it hits an event it can't handle itself, it returns this event (in a wrapper, perhaps) to the caller. If no events are available, it returns None. The state and change list can be queried or cleared separately.
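
The accumulate-and-hand-back behaviour can be sketched as follows. The event tuples and names are stand-ins, not SDL's actual event structures.

```python
# Sketch of the input pump: accumulate control events, hand the first
# unhandleable event back to the caller, return None when drained.

CONTROL_EVENTS = {"keydown", "keyup", "axis"}

class InputEngine:
    def __init__(self, pending):
        self.pending = list(pending)   # stand-in for the SDL event queue
        self.changes = []              # accumulated control changes

    def pump(self):
        while self.pending:
            event = self.pending.pop(0)
            if event[0] in CONTROL_EVENTS:
                self.changes.append(event)   # handled internally
            else:
                return event                 # caller deals with this one
        return None                          # queue exhausted

engine = InputEngine([("keydown", "w"), ("quit",), ("keyup", "w")])
unhandled = engine.pump()    # stops at the quit event and returns it
rest = engine.pump()         # drains the remaining control event
```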

The graphics engine

This section is preliminary.

The graphics engine is responsible for the on-screen display of the scene. The scene is always 2D and always seen from above (the view could later be changed, e.g. tilted forwards, but all drawing routines assume an orthogonal top-down approach).

The graphics engine keeps track of several graphics objects. These contain their positions and instructions to draw them on the screen. For now, this should not include drawing callback functions; more likely, this will be a texture id, texture coordinates, and a box to be drawn.
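
A graphics object as described might look like this. The field names are illustrative; only the texture id, texture coordinates and box come from the design above.

```python
# Minimal sketch of a graphics object: position plus drawing data,
# with no drawing callback function.

from dataclasses import dataclass

@dataclass
class GraphicsObject:
    x: float
    y: float
    texture_id: int        # which texture to bind
    tex_coords: tuple      # (u0, v0, u1, v1) within that texture
    box: tuple             # (width, height) of the quad to draw

ship = GraphicsObject(x=10.0, y=-4.0, texture_id=3,
                      tex_coords=(0.0, 0.0, 1.0, 1.0), box=(2.0, 2.0))
```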

The graphics engine uses a camera, which (usually) tracks one graphics object, typically the player. The camera is not required to track the object's movements perfectly: some lag may be desirable, for example, or the camera may be focused ahead of a moving object to provide a better view of the object's path. The scale of the camera should be more or less freely selectable by the player.
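
The camera lag can be sketched as an exponential approach toward the tracked object. The smoothing factor is an assumption for illustration.

```python
# Sketch of a lagging camera: each step it moves a fraction of the
# remaining distance toward its target, so it never jumps.

class Camera:
    def __init__(self, x=0.0, y=0.0, smoothing=0.2):
        self.x, self.y = x, y
        self.smoothing = smoothing   # assumed fraction covered per step

    def track(self, target_x, target_y):
        self.x += (target_x - self.x) * self.smoothing
        self.y += (target_y - self.y) * self.smoothing

cam = Camera()
for _ in range(10):
    cam.track(100.0, 0.0)   # object sitting still at x = 100
# the camera closes most, but not all, of the distance
```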

The graphics engine is probably also the one to draw the GUI, showing player status, chat messages, and a radar. Many graphics objects will also have a radar representation.

The font engine

This section is preliminary.

The font engine is a subengine of the graphics engine. Its only responsibility is to provide the graphics engine with a robust yet flexible way to draw fonts.

By using the FreeType library through PyGame's wrapper of the SDL_ttf library, the font engine can use just about any TTF font the player desires and possesses.

The font engine, as currently planned, uses the FreeType library to create bitmaps of the font in a specific size, then saves these as OpenGL textures. These textures are full-white luminosity textures, with the font shape in the alpha channel. This allows both antialiasing and coloring of the text, while using less memory. Unfortunately, an alpha-only texture defaults to fully black, most likely to aid shadow textures, so a two-channel texture must be used.
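
The two-channel texture layout can be illustrated by packing a glyph's alpha bitmap into interleaved luminance/alpha bytes, ready for upload as an OpenGL texture. The function name and the tiny bitmap are made up for illustration.

```python
# Sketch of packing a glyph into a two-channel (luminance + alpha)
# texel buffer: every pixel is full white, with the glyph shape in
# the alpha channel.

def pack_luminance_alpha(alpha_rows):
    data = bytearray()
    for row in alpha_rows:
        for a in row:
            data += bytes((255, a))   # luminance = 255, alpha = coverage
    return bytes(data)

glyph = [[0, 128],      # a hypothetical 2x2 alpha bitmap
         [255, 0]]
texels = pack_luminance_alpha(glyph)  # 8 bytes: 2 channels x 4 pixels
```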

In the current implementation, each glyph (character) has its own texture to avoid bleeding at the edges when scaling is used. However, it does not look like significant scaling is needed, so perhaps the glyphs can be grouped after all.

One of the open problems is choosing the font size. Since OpenGL makes the graphics largely resolution independent, the size of the text on the screen should be selectable independently as well. This suggests calculating the pixel size of the glyphs at the current resolution, then selecting the font at that size for the clearest picture. However, this approach has two drawbacks:

  1. The font size is given in points, not in pixels. This makes it difficult (impossible?) to just calculate the needed size.
  2. The font size is an integer. It may be necessary to choose between native resolution and strict adherence to resolution independence.
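
Both drawbacks can be seen in a rough conversion sketch. At the common nominal 72 dpi, one point equals one pixel; the dpi value and helper name here are assumptions.

```python
# Sketch of the point-vs-pixel problem: the best we can do is round
# to the nearest integer point size, so exact resolution independence
# may be unattainable.

def nearest_point_size(target_pixels, dpi=72):
    # points = pixels * 72 / dpi, then forced to an integer
    return max(1, round(target_pixels * 72.0 / dpi))

size = nearest_point_size(13.4)          # close to the target, not exact
hidpi = nearest_point_size(13.4, dpi=96) # same pixels, smaller point size
```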

The sound engine

This section is preliminary.

Not much thought has gone into the sound engine yet. If PyOpenAL (OpenAL bindings for Python) becomes mature, stable and widespread, OpenAL might be used to take advantage of distance attenuation. Otherwise, a simple effects library like SDL_mixer (wrapped by PyGame) should suffice.

Sound is a secondary consideration.

The physics engine

This section is preliminary.

The physics engine's task is the low-level handling of physics objects. Low-level in this context means updating positions according to velocities and, at the most, collision detection.

Basically, a physics object has a position, an orientation, a velocity, an acceleration in the form of a force vector, and a friction coefficient. Whether friction is calculated by the client remains to be seen.

The server always has the first and the final word on a physics object. It has its own, global copy of the physics engine. The game engine calculates the force vectors the physics objects would like to assume. The physics engine is then responsible for updating the physics objects, taking into account the position, orientation (force vectors do not depend on the orientation, at least as far as the physics engine is concerned), acceleration and friction of each object.
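
One physics step could be sketched as simple explicit integration. The friction model, mass and constants are assumptions for illustration, not the planned implementation.

```python
# Sketch of one server-side physics step: apply the force vector set
# by the game engine, apply crude velocity-proportional friction,
# then advance the position.

class PhysicsObject:
    def __init__(self, mass=1.0, friction=0.1):
        self.pos = [0.0, 0.0]
        self.vel = [0.0, 0.0]
        self.force = [0.0, 0.0]    # set by the game engine each step
        self.mass = mass
        self.friction = friction   # assumed simple coefficient

    def step(self, dt):
        for i in (0, 1):
            # force gives acceleration, independent of orientation
            self.vel[i] += self.force[i] / self.mass * dt
            # hypothetical friction: bleed off a fraction of velocity
            self.vel[i] *= (1.0 - self.friction * dt)
            self.pos[i] += self.vel[i] * dt

obj = PhysicsObject()
obj.force = [2.0, 0.0]       # constant push along the x axis
for _ in range(100):
    obj.step(0.01)           # one second of simulated time
```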

The server passes the current state of (applicable) physics objects to the player. This typically includes the player's own physics object. The client contains two physics engines: one is called the hidden physics engine and the other the visible physics engine. The hidden engine endeavours to be a faithful copy of the server's engine. Any changes communicated from the server instantly override the state of the hidden engine. The visible engine is the one the client actually "sees", for the most part. The physics-to-graphics bridge translates the visible engine to the graphics engine. Both hidden and visible engines update their states according to the physics rules. Additionally, the visible engine's state is slowly adapted to the hidden engine's state. This ensures that no sudden jumps or jerks are communicated to the player.
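
The hidden/visible split can be sketched in one dimension: server updates snap the hidden state instantly, while the visible state eases toward it each step. The blend factor is an assumption.

```python
# Sketch of the client's two physics states: the hidden state mirrors
# the server, the visible state adapts slowly so the player never
# sees a sudden jump.

class ClientPhysics:
    def __init__(self, blend=0.1):
        self.hidden = 0.0      # one coordinate, for illustration
        self.visible = 0.0
        self.blend = blend     # assumed adaptation rate per step

    def server_update(self, value):
        self.hidden = value    # the server overrides instantly

    def step(self):
        # the visible state drifts toward the authoritative hidden state
        self.visible += (self.hidden - self.visible) * self.blend

client = ClientPhysics()
client.server_update(10.0)    # server says the object jumped to 10
client.step()                 # the visible object only moves a little
```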

The client's engines should probably not calculate collisions. The client cannot be sure what the effect of a collision is going to be.

Further extensions (not needed for Zorn) could add different flavors of physics objects, such as cars, which have their own physical rules and limitations.

The game engine

This section is preliminary.

The game engine is server-side only. It governs the behaviour (usually in the sense of force vectors) of the physics objects in the physics layer. It is also responsible for handling (not detecting!) collisions, spawning players, enemies and projectiles, adjusting player values (score, health) and starting/ending the game. Not much consideration is given to the game engine at this point.

Ben Deutsch