Here you will find the basic documentation for MAGE. If you are looking for API documentation, please have a look at the API reference.
Questions and issues
If you have a question which is not covered by the current documentation:
- Open an issue and request an addition to the documentation
- If you have some time, please make a pull request to enhance the documentation
Introduction
MAGE is a Game Server Framework for Node.js. It allows game developers to quickly create highly interactive games that are performant and scalable.
Why MAGE?
Write interactive games
Even if you are writing a single-player game, you may want to develop features such as ranking and shops, store player information, or send push notifications to your game clients.
MAGE makes this easy by providing both a framework to create RPC calls (called user commands), and by providing a backend API to send asynchronous messages between players.
Write multiplayer games
MAGE’s forte is writing multiplayer games; it is an excellent fit for PVP games.
Scalable game servers
One of MAGE’s goals is to make it easy to write game servers that scale.
But scale how? More specifically, MAGE helps you:
- Scale your code; MAGE helps you write code that fits in your head
- Scale your runtime; your game server will run on your laptop, in a large cluster, and on anything in between
- Scale your ops; MAGE provides all the necessary APIs for you to be able to monitor your game server
Rich ecosystem
MAGE provides only the primitives required to write interactive games. However, MAGE comes with a rich ecosystem of official and third-party modules that will help you add features such as static data management, maintenance management, and so on.
Official MAGE modules can be found on GitHub. We also often showcase external modules (and how to use them) on our development blog.
Features
TypeScript & JavaScript
MAGE actively supports both JavaScript and TypeScript projects; you can easily choose in which language you wish to create a project, and many tools support both languages.
Transactional API endpoints
This is the core feature of MAGE; it provides you with a State API that helps you create transactional API endpoints.
Transactional API endpoints (named user commands) allow you to stack all your data mutations and asynchronous messages on a state object. This state object will either automatically be committed once your API call completes, or rolled back if an error occurs.
Multiple storage backends
The data storage API allows you to abstract your storage backend. Not only will it allow you to access and store data in your database of choice, but it will also help you with database migrations whenever needed.
Built-in distributed mode
Thanks to the built-in service discovery system, all you need to do to deploy your game server in a cluster is configure the discovery service.
This means that your messages will always be routed to the right server and correctly forwarded to the right player.
Rich ecosystem of SDKs, modules and tools
MAGE officially provides client SDKs for both HTML5 and Unity games, making it easier than ever to connect your game to your servers.
On top of that, the mage organization on GitHub hosts a wide range of additional tools and modules which can help you with various aspects of your MAGE development pipeline.
Requirements
Node.js
# On macOS and Linux (using nvm):
nvm install 8
nvm use 8
# On Windows (PowerShell), you will need to provide the specific version to install and use:
Install-NodeVersion v8.10.0
Set-NodeVersion v8.10.0
Node.js (or Node) is essentially JavaScript for servers, and the MAGE platform has been built on it. There are some concepts you will most likely benefit from understanding before getting started on a serious Node project, and several online resources can help you make your first steps with Node.
We recommend using a Node.js version manager suited to your platform, such as the ones used in the commands above.
NPM
NPM command completion is not available on Windows
npm install -g npm@latest
npm completion >> ~/.bashrc
npm run [tab][tab]
# Will output: archivist:create archivist:drop archivist:migrate develop [...]
NPM ships by default with your Node.js installation. However, since bugs are fixed in later versions, we strongly recommend periodically updating your NPM installation to make sure you get all the fixes.
This project, as well as the projects it generates on bootstrap, uses NPM for all its build tasks. Therefore, we also recommend setting up NPM command completion in your terminal. See https://docs.npmjs.com/cli/completion for more details.
Installation
Naming your environment
You can replace “development” with something else if you want
export NODE_ENV=development
set-item env:NODE_ENV=development
When MAGE creates a new project, it will set up a configuration file for your environment. The name of your environment is decided by the NODE_ENV environment variable. If your system administrator has not already prepared it for you on the system you are developing on, you can do it yourself by adding the above line to your shell’s profile file (.bashrc, .zshrc, profile.ps1, and so on).
The MAGE installer will create a development.yaml configuration file, and will use that from there on, whenever you start up the game.
Setting up a new MAGE project
As a JavaScript project
Replace my-gameserver with how you wish to name your game
# Note: use npx, not npm!
npx mage create my-gameserver
cd my-gameserver
Then use the following command to start your game server (ctrl+c to exit)
npm run develop
Running the steps above is the easiest way to create a new project. Do this from inside an empty folder that is named after the game you are developing.
You can also specify which version you wish to install by adding @[version number] at the end of the line.
As a TypeScript project
# Note: use npx, not npm!
npx mage create my-gameserver --typescript
cd my-gameserver
MAGE can also create TypeScript projects; to do so, all you need to do is add the typescript or ts flag to the previous command.
Upgrading MAGE in an existing project
In some cases, you may want to run npm run clean first.
npm install --save mage@1.2.3
To upgrade to a new version of MAGE, simply re-run the install with the --save flag, and specify the version you now wish to use.
Versioning of MAGE
MAGE version numbering follows Semantic Versioning or “semver” logic.
That means that given a version number MAJOR.MINOR.PATCH, we increment the:
- MAJOR version when we make incompatible API changes,
- MINOR version when we add functionality in a backwards-compatible manner, and
- PATCH version when we make backwards-compatible bug fixes.
Working with master (latest) and development
npm install --save mage/mage#master
You may choose, for the duration of your application development, to work on a pre-release version of MAGE. To do so, you can use the master branch when running npm install.
API Reference
Before you jump into the API documentation, you will most likely want to read through this document to get familiar with the basics of MAGE.
Client SDKs
The following client SDKs are currently available to connect your game client to a MAGE server:
Name | Language | Location |
---|---|---|
mage-js-sdk | JavaScript (browser) | GitHub |
mage-sdk-unity | C# (For Unity) | GitHub |
Some SDKs may have optional sub-libraries to be installed depending on which MAGE functionality your game will be using.
Configuration
Format
The configuration for your game may be written in the YAML format with the .yaml file extension, the JSON format with the .json file extension, or the JSON5 format with the .json5 file extension.
In a nutshell, YAML is the more human-readable format, while JSON is more JavaScript-like in how it represents its variable types. JSON5 is a good middle ground between the two.
Location
config/
config
├── custom.yaml
├── default.yaml
├── development.yaml
└── production.yaml
The files are located in your game’s config folder.
Configuration files will be loaded in the following order:
- config/default.yaml: The base configuration file for your project
- config/[NODE_ENV].yaml: Configuration for a specific environment
- config/custom.yaml: Configuration for your local setup
If you want to load multiple configuration files, you may comma-separate them in your NODE_ENV, like this: NODE_ENV=bob,test. They will be loaded in order, the latter overriding the former.
Custom configuration is generally used during development; in some cases, developers will need to specify their own credentials or personalized configuration. Newly created projects will include a custom file; however, this file will also be added to your .gitignore file to avoid any conflicts between each developer’s configuration.
Development
This turns on all options
developmentMode: true
Alternatively, take control by toggling the individual options. The ones you leave out are considered to be set to true. Set any of the following to false to change the default development mode behavior.
developmentMode:
archivistInspection: true # Archivist will do heavy sanity checks on queries and mutations.
To run your game in development, MAGE has a developmentMode configuration flag. This enables or disables certain behaviors which make development a bit more convenient. If you want more granular control over which of these behaviors are turned on or off, you can specify them in an object.
Environment-based configuration
server:
clientHost:
bind: MAGE_APP_BIND:string
mmrp: MAGE_APP_MMRP:bool
You may also add a config/environment.yaml file to your project: this file serves as a means to connect environment variables to specific configuration entries. Environment variables will supersede any configuration set through configuration files.
In this example, we connect MAGE_APP_BIND and MAGE_APP_MMRP to our configuration.
Note that you may also optionally specify the type to cast the environment variable into. In this case, for instance, we set MAGE_APP_MMRP as a boolean because we might want to disable MMRP by running MAGE_APP_MMRP=false npm run mage, or by otherwise setting the value in the environment.
Dynamic configuration
test/index.js
const config = require('mage/lib/config');
config.set('some.path.to.config', 1)
const mage = require('mage');
// continue with your test code
The moment mage is required or imported, it will automatically set up the configuration management API as well as read configuration files. However, in some cases - such as unit testing - you might want to forcibly disable certain services, or enforce fixed behaviors.
To do so, you have the option of requiring MAGE’s configuration module first and fixing your configuration there; once you then require MAGE itself, your dynamic configuration will be applied.
You may dynamically set a new configuration at any time while the MAGE server is running; however, keep in mind that most modules only read configuration entries when they are initialized, so dynamically changing the configuration after MAGE has been initialized will likely not have any effect.
Modules
External helper modules
External helper modules are optional but simplify the use of the library. They are different from MAGE built-in modules and user modules: they don’t expose user commands, they simply act as helpers.
Here are the most important ones:
Module | Description | TypeScript library |
---|---|---|
mage-console | Mage development console with REPL interface and auto-reload on file change | |
mage-validator | Validation system for MAGE topics and user command input types | ✔ |
mage-module-shard | Helps you to implement modules that act as shards within a MAGE cluster | ✔ |
mage-vaulthelper-couchbase | Helps you to access mage Couchbase backend | ✔ |
mage-module-maintenance | Helps you implement a maintenance mode for your game cluster | ✔ |
mage-https-devel | Toolchain for enabling the use of HTTPS during local development | |
Built-in modules
lib/index.js
// Default modules with a fresh mage install
// (with npx mage create [game name])
mage.useModules([
'archivist',
'config',
'logger',
'session',
'time'
]);
The MAGE library ships with a number of built-in modules that provide facilities such as sessions and authentication. The full list of available modules is as follows:
Module | Description |
---|---|
archivist | Exposes user commands to synchronize data in real time |
config | Exposes the client configuration through a user command |
auth | Authentication facility |
logger | Logging facility |
session | Session facility |
time | Time manipulation facility |
To see the default modules on a fresh install of MAGE and how they can be set up, see the example above. Note that the auth module is not activated by default.
Set up the auth module
To set up the auth module, adding it to mage.useModules is not sufficient; a basic configuration is also needed. See the examples below for a basic configuration.
Here are the different hash types you can use:
Type | Description |
---|---|
pbkdf2 | pbkdf2 algorithm |
hmac | hmac algorithm |
hash | Basic hash |
lib/index.js
mage.useModules([
'auth'
]);
lib/archivist/index.js
// a valid topic with ['username'] as an index
exports.auth = {
index: ['username'],
vaults: {
myDataVault: {}
}
};
config/default.yaml
module:
auth:
# this should point to the topic you created
topic: auth
# configure how user passwords are stored, the values below are the
# recommended default
hash:
# Please see https://en.wikipedia.org/wiki/PBKDF2 for more information
type: pbkdf2
algorithm: sha256
iterations: 10000
hash:
# Please see https://en.wikipedia.org/wiki/HMAC for more information
type: hmac
algorithm: sha256
# Hex key of any length
key: 89076d50860489076d508604
hash:
# Basic hash
type: hash
algorithm: sha1
File structure
mygame/
lib/
modules/
players/ -- the name of our module
index.js -- the entry point of the module
usercommands/
login.js -- the user command we use to login
Modules are a way to separate concerns; they contain groups of functionality for a certain subject (users, ranking, shop, etc.), and expose APIs (called user commands) that are accessible through the different client SDKs.
Module setup and teardown
lib/modules/players/index.js
exports.setup = function (state, callback) {
// load some data
callSomething('someData', callback);
};
exports.teardown = function (state, callback) {
// persist or clean up some data
callSomething('someData', callback);
};
MAGE modules can optionally have the two following hooks:
- setup: When the server is started
- teardown: When the server is about to be stopped
The setup function will run when MAGE boots up, allowing your module to prepare itself, for example by loading vital information from a data store into memory. This function is optional, so if you do not have a setup phase, you don’t need to add it.
Teardown works in a similar way; you may want to store to the database some data you have been keeping in memory, or send some notifications to your users.
Note that by default, each hook will have 5000 milliseconds to complete; should you need longer than that, you will need to set exports.setupTimeout and exports.teardownTimeout respectively to the value of your choice.
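For example, to give both hooks 10 seconds instead (an arbitrary value chosen for illustration; like the default, it is expressed in milliseconds):
lib/modules/players/index.js
// Allow up to 10000 milliseconds for setup and teardown, instead of the default 5000
exports.setupTimeout = 10000;
exports.teardownTimeout = 10000;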
Module methods
lib/modules/players/index.js
var mage = require('mage');
exports.register = function (state, username, password, callback) {
var options = {
acl: ['user']
};
mage.auth.register(state, username, password, options, callback);
};
You will then want to add methods to your modules. These are different from your API endpoints; they are similar to model methods in an MVC framework, but they are not attached to an object instance.
This example shows how you could create a quick player registration method.
User commands
lib/modules/players/usercommands/register.js
var mage = require('mage');
// Who can access this API?
exports.acl = ['*'];
// The API endpoint function
exports.execute = function (state, username, password, callback) {
mage.players.register(state, username, password, function (error, userId) {
if (error) {
return state.error(error.code, error, callback);
}
state.respond(userId);
return callback();
});
};
User commands are the endpoints which the game client will be accessing. They define what class of users may access them, and what parameters are acceptable.
The name of a user command is composed of its module name and the name of the file. For instance, in the example here, the name of the user command would be players.register.
The parameters this user command accepts are everything between the state parameter and the callback parameter; so in this case, players.register accepts a username parameter and a password parameter.
Our user command also receives a state object. We won’t describe exactly what states are used for yet, but we can see that they are to be used to respond with an error should one occur.
Testing your user command
npm run archivist:create
npm run develop
Before we try to test the code above, we will first need to create a place for the auth module to store the data; archivist:create will do just that.
Once this command completes, we’ll start our MAGE project in development mode.
In a separate terminal window
curl -X POST http://127.0.0.1:8080/game/players.register \
--data-binary @- << EOF
[]
{"username": "test","password": "secret"}
EOF
Invoke-RestMethod -Method Post -Uri "http://127.0.0.1:8080/game/players.register" -Body '[]
{"username": "username", "password": "password"}' | ConvertTo-Json
For testing most user commands in MAGE, you would normally need to use one of the client SDKs; however, this example is simple enough for us to be able to simply query the endpoint manually.
You may notice that the content we send is in line-separated JSON format, and that the first thing we send is an empty array; this array, under normal circumstances, would contain credentials and other metadata.
Login
lib/modules/players/index.js
var mage = require('mage');
exports.register = function (state, username, password, callback) {
var options = {
acl: ['user']
};
mage.auth.register(state, username, password, options, function (error, userId) {
if (error) {
return callback(error);
}
mage.logger.debug('Logging in as', userId);
mage.auth.login(state, username, password, callback);
});
};
Now that we can register users, we may want to automatically log the user in by calling mage.auth.login. Even though the registration has not yet been committed to the database, our state transaction contains the information for the newly registered user, which allows login to complete successfully.
For more information about the auth module, please refer to the API documentation.
ACLs
lib/modules/players/usercommands/notAccessibleToNormalUsers.js
var mage = require('mage');
// Who can access this API?
exports.acl = ['special'];
// [...]
As you may have noticed, mage.auth.register receives an acl option, allowing you to attach different access rights to a given user. User commands can then, in turn, list which ACL groups are allowed to access them.
For instance, in the example above, a user registered only with the user credential would not be allowed to execute this user command. Only if the ‘user’ or ‘*’ wildcard ACL were added to the exports.acl array would a normal user be able to execute it.
By default, user is assigned to any registered player, but you may create your own ACL groups as you see fit.
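To illustrate, here is a sketch of how a custom ACL group could be wired up, based on the register example above; the moderator group name and the kick user command are purely hypothetical:
lib/modules/players/index.js
var mage = require('mage');

// Register a user into a hypothetical custom "moderator" ACL group
exports.registerModerator = function (state, username, password, callback) {
  var options = {
    acl: ['moderator']
  };

  mage.auth.register(state, username, password, options, callback);
};
lib/modules/players/usercommands/kick.js
// Only members of the hypothetical "moderator" ACL group may call this user command
exports.acl = ['moderator'];
// [...]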
User command timeout
lib/modules/players/usercommands/register.js
var mage = require('mage');
// Who can access this API?
exports.acl = ['*'];
exports.timeout = 200
// [...]
By default, user command execution will time out after 15,000 milliseconds; however, in some cases, you may want to increase or reduce that value.
To do so, simply set exports.timeout to your desired value.
Keep in mind that the execution of your user command may not get interrupted; we will return an error on the next access to the user command’s state (which will in turn ensure that execution is interrupted), but manual operations unrelated to state may still complete.
Request caching
curl -X POST http://127.0.0.1:8080/game/players.checkStats?queryId=1 \
--data-binary @- << EOF
[{"key":"be20d767-4067-40d6-92dc-c52067b7d21e:lMftdnXxEFbP3ctq","name":"mage.session"}]
{}
EOF
Invoke-RestMethod -Method Post -Uri "http://127.0.0.1:8080/game/players.checkStats?queryId=1" -Body '[{"key":"be20d767-4067-40d6-92dc-c52067b7d21e:lMftdnXxEFbP3ctq","name":"mage.session"}]
{}' | ConvertTo-Json
By default, responses to requests from authenticated users which contain a numerical query identifier (the queryId parameter shown in the example) will be automatically cached; this avoids double execution when the client disconnects before the response could be sent and then wishes to retry.
This comes in handy for most operations (trades, purchases and so on), but may be undesirable in some circumstances (when you serve computed static data or static data stored in the database).
lib/modules/players/usercommands/checkStats.js
exports.cache = false;
To disable this behavior, simply set cache to false in your user command’s definition.
Using async/await (Node 7.6+)
lib/modules/players/index.js
'use strict';
const promisify = require('es6-promisify');
const {
auth
} = require('mage');
exports.register = async function (state, username, password) {
const options = {
acl: ['user']
};
const register = promisify(auth.register, auth);
return register(state, username, password, options);
};
lib/modules/players/usercommands/register.js
'use strict';
const {
players
} = require('mage');
module.exports = {
acl: ['*'],
async execute(state, username, password) {
return players.register(state, username, password);
}
};
lib/modules/players/usercommands/staticData.js
'use strict';
module.exports = {
serialize: false,
acl: ['*'],
async execute(state) {
return '{"static": "data"}';
}
};
If you are using a newer Node.js version, which includes the latest ES2015 and ES2017 language features, you can rewrite the previous API as follows. As you can see, this results not only in much fewer lines of code, but also in much simpler, easier-to-read code.
Let’s review what we have done here:
- Variable declaration using const, which prevents the variable from being rebound;
- Destructuring, to extract the specific MAGE modules and components we want to use;
- async/await and Promises, to simplify the expression of asynchronous code paths.
Note that, due to legacy, most of MAGE’s APIs are callback-based, which can sometimes conflict with the latest Promise-based APIs; to help with this, we recommend that you install es6-promisify in your project, and then use it to wrap the different MAGE APIs you wish to use.
Also, you can see that the players.staticData user command serializes its data to JSON by itself instead of relying on MAGE to serialize the return data. This can be useful when you wish to optimize how MAGE will serve certain pieces of data which will not change frequently. When you wish to do so, simply make sure that exports.serialize is set to false, and manually return a stringified version of your data.
Error management
lib/modules/players/usercommands/register.js
'use strict';
const {
players,
MageError
} = require('mage');
class NoHomersError extends MageError {
constructor(data) {
super(data);
// Code to return to the client (default: server)
this.code = 'server';
// Log level to use to log this error (default: error)
this.level = 'warning';
// Details to log alongside the message (default: undefined)
this.details = data.details;
// Error type - traditionally the class name
this.type = 'NoHomersError';
}
}
module.exports = {
acl: ['*'],
async execute(state, username, password) {
if (username.length > 10) {
// Using MageError directly
throw new MageError({
code: 'username_length_exceeded',
level: 'warning',
message: 'Username is more than 10 characters',
details: {
received: username
}
})
}
if (username === 'homer') {
// Using a custom error class inheriting MageError
throw new NoHomersError({
message: 'We already have one Homer anyway '
})
}
return await players.register(state, username, password);
}
};
When using async/await, you will want to customise the behavior of your errors with regard to how they will be logged, what data will be logged, what message will be returned to the user, and so on.
To do so, MAGE provides the MageError error class, which can be used as-is or extended as needed. This error class allows you to control precisely what you will log in case of error, and what should be returned to the user.
States
You may notice that our user commands receive an object called state as part of the argument list.
Transaction management
States are used for tracking the state of a current user command execution. For instance, you will generally have to do a series of operations which, depending on the overall outcome of your user command execution, should all be stored (or all discarded if a failure occurs). States provide the mechanism for stacking operations which need to occur, and then commit them in one shot.
Transactional storage
lib/modules/players/usercommands/registerManyBroken.js
var mage = require('mage');
exports.acl = ['*'];
exports.execute = function (state, credentialsOne, credentialsTwo, callback) {
mage.players.register(state, credentialsOne, function () {
mage.players.register(state, credentialsTwo, function () {
var error = new Error('Whoops this failed!');
return state.error(error, error, callback);
});
});
};
In the example here, we error out on purpose, but the intent is clear: we are trying to register two new users. However, because we end up calling state.error, the actions will not be executed; the two mage.players.register calls we have made would only store information if the call succeeded.
For more information about the API you use to access your data store, please read the Archivist API documentation.
Transactional event emission
lib/modules/players/usercommands/registerAndNotifyFriend.js
var mage = require('mage');
exports.acl = ['*'];
exports.execute = function (state, credentials, friendId, callback) {
state.emit(friendId, 'friendJoined', credentials.username);
mage.players.register(state, credentials, function (error) {
if (error) {
return state.error(error, error, callback);
}
callback();
});
};
The state object is also responsible for emitting events for players. The event system is what enables the MAGE server and MAGE clients to stay synchronized, and is also the first-class communication interface to get players to send data between each others.
Events also benefit from the transactional nature of states; events to be sent to users will not be sent unless the call succeeds. For instance, in this example, if mage.players.register were to return an error, the event emitted by state.emit would never be sent to the other player referenced by friendId.
For more information on the State API and how events are emitted, please read the State API documentation.
Actors & Sessions
MAGE defines users as “actors”, who are represented by an ID (the Actor ID).
For an actor to make changes to the database (through a user command) and send events to other users, it will generally need to be authenticated. During authentication, an actor starts a session and is assigned a unique session ID.
As long as a session ID is used and reused by an actor, it will stay active. After a long period of non-activity however, the session will expire and the actor will be “logged out” as it were.
Session module
lib/index.js
mage.useModules([
'session'
]);
Freshly bootstrapped MAGE applications already have the session module activated and configured (including a basic Archivist configuration which will store session-information in memory).
For multi-node MAGE clusters, make sure to change the vault used for sessions to a shared storage solution such as memcached, Redis, and so on, since MAGE will need to retrieve sessions to route messages to players properly.
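As a sketch, assuming your configuration defines a Redis-backed vault named sessionVault under archivist.vaults (see the Redis vault backend section below), pointing the session topic at it could look like the following; the topic name and actorId index used here are assumptions, so verify them against the topic your bootstrapped project generated in lib/archivist/index.js:
lib/archivist/index.js
// Store sessions in a shared vault so that any node in the cluster can read them
exports.session = {
  index: ['actorId'],
  vaults: {
    sessionVault: {}
  }
};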
Auth module
Once configured, you can simply add the auth module to your useModules call. Please see the built-in modules section to learn how to set up the auth module.
Logging in
lib/modules/players/index.js
var mage = require('mage');
exports.login = function (state, username, password, callback) {
mage.auth.login(state, username, password, callback);
};
lib/modules/players/usercommands/login.js
var mage = require('mage');
var logger = mage.logger.context('players');
exports.acl = ['*'];
// The API endpoint function
exports.execute = function (state, username, password, callback) {
mage.players.login(state, username, password, function (error, session) {
if (error) {
return state.error(error.code, error, callback);
}
logger.debug('Logged in user:', session.actorId);
callback();
});
};
The auth module allows us to log in. For now, let’s not bother with user accounts and use the anonymous login ability instead. As long as we do this as a developer (in development mode), we can log in anonymously and get either user-level or even administrator-level privileges. In production that would be impossible, and running the same logic would result in “anonymous” privileges only. You wouldn’t be able to do much with that.
The session module will have automatically picked up the session ID that has been assigned to us, so there is nothing left for us to do.
When you are ready to create user accounts, please read on about how to use and configure the auth module.
Testing your login user command
curl -X POST http://127.0.0.1:8080/game/players.login \
--data-binary @- << EOF
[]
{"username": "test","password": "secret"}
EOF
Invoke-RestMethod -Method Post -Uri "http://127.0.0.1:8080/game/players.login" -Body '[]
{"username": "test", "password": "secret"}' | ConvertTo-Json
If authentication fails, you will receive [["invalidUsernameOrPassword",null]]; otherwise, you should get back an event object containing your player’s session information.
Changing the password
You can change the password for a given user after registration. It works the same way as registration, except that the user ID needs to exist or you will get a UserNotFoundError. Note that this does not invalidate current sessions for this account.
lib/modules/players/index.js
var mage = require('mage');
exports.changePassword = function (state, username, newPassword, callback) {
mage.auth.changePassword(state, username, newPassword, callback);
};
Archivist
Archivist is a key-value abstraction layer generally used with state objects.
The player module created through the previous sections already uses Archivist to store data on the local file system; more specifically, the auth module used in the Actors & Sessions section of this user guide uses Archivist behind the scenes to store credentials for newly created users.
Vaults
List, read and write order
./config/default.yaml
archivist:
# When doing "list" operations, will attempt each mentioned vault until successful
listOrder:
- userVault
- itemVault
# When doing "get" operations, will attempt each mentioned vault until successful
readOrder:
- userVault
- itemVault
# When doing "add/set/touch/del" operations, will write to each mentioned vault in the given order
writeOrder:
- userVault
- itemVault
The listOrder, readOrder, and writeOrder properties have to be defined in your configuration file to specify the order used when reading or writing data.
For read operations (listOrder, readOrder), MAGE will attempt the mentioned vaults until one is successful. For write operations (writeOrder), MAGE will write to each mentioned vault, in the given order.
The item won’t be written before the player, because writes follow the order of the writeOrder configuration
state.archivist.set('item', { userId: userId, itemId: itemId }, itemData);
state.archivist.set('player', { userId: userId }, playerData);
Please note that if you are doing multiple operations in a single state transaction, the order in which the operations will be executed corresponds to the order provided by your configuration, not the order of the actual calls.
Vault types
./config/default.yaml
archivist:
vaults:
userVault:
type: file
config:
path: ./filevault/userVault
itemVault:
type: file
config:
path: ./filevault/itemVault
As mentioned, vaults are used by archivist to store data. Currently, the following backend targets are supported:
Backend | Description |
---|---|
file | Store data to the local disk, in JSON files. |
memory | Keep data in memory (does not persist). |
client | Special vault type (client-side archivist support required). |
couchbase | Couchbase interface |
mysql | MySQL interface. |
redis | Redis interface. |
Vaults can have different configurations for different environments, as long as the Archivist API set used in your project is provided by the different vault backends you wish to use.
File vault backend
The file vault can be used to store data directly in your project. A common use case for the file vault backend is static data storage.
type: file
config:
path: ./filevault
disableExpiration: true # optional (default: false)
operation | supported | implementation |
---|---|---|
list | ✔ | fs.readdir(config.path); |
get | ✔ | fs.readFile('myfile.filevault' and 'myfile.json'); |
add | ✔ | fs.writeFile('myfile.filevault' and 'myfile.json'); |
set | ✔ | fs.writeFile('myfile.filevault' and 'myfile.json'); |
touch | ✔ | fs.readFile('myfile.filevault'); fs.writeFile('myfile.filevault'); |
del | ✔ | fs.readFile('myfile.filevault'); fs.unlink('myfile.filevault' and 'myfile.json'); |
archivist:create support is done via mkdirp, and simply requires that the user running the command has enough rights to create folders in the project’s path.
Memory
type: memory
The memory vault backend can be used to keep data in-memory for the duration of the execution of your MAGE instance. Data will not be persisted to disk.
operation | supported | implementation |
---|---|---|
list | ✔ | for (var trueName in cache) { } |
get | ✔ | deserialize(cache[trueName(fullIndex, topic)]) |
add | ✔ | cache[trueName(fullIndex, topic)] = serialize(data) |
set | ✔ | cache[trueName(fullIndex, topic)] = serialize(data) |
touch | ✔ | setTimeout() |
del | ✔ | delete cache[trueName(fullIndex, topic)] |
Client
This vault is used to send updates to the player, so that their data is always synchronized in real time.
This vault is always created when an archivist is instantiated by a State object, using a name identical to its type: client. This vault type requires no configuration.
operation | supported | implementation |
---|---|---|
list | ||
get | ||
add | ✔ | state.emitToActors('archivist:set') |
set | ✔ | state.emitToActors('archivist:set' or 'archivist:applyDiff') |
touch | ✔ | state.emitToActors('archivist:touch') |
del | ✔ | state.emitToActors('archivist:del') |
Couchbase
type: couchbase
config:
options:
# List of hosts in the cluster
hosts: [ "localhost:8091" ]
# Only for Couchbase Server >= 5.0
# User credentials
username: Administrator
password: "password"
# Only for Couchbase Server < 5.0
# Bucket password (optional)
password: "toto"
# optional
bucket: default
# optional, useful if you share a bucket with other applications
prefix: "bob/"
# optional, can use any option specified in https://developer.couchbase.com/documentation/server/5.1/sdk/nodejs/client-settings.html#topic_pkk_vhn_qv__d397e189
options:
# useful to debug network errors (e.g. authentication errors)
detailed_errcodes: 1
# options only used with archivist:create
create:
adminUsername: admin
adminPassword: "password"
bucketType: couchbase # can be couchbase or memcached
ramQuotaMB: 100 # how much memory to allocate to the bucket
For Couchbase Server >= 5.0, options.username and options.password have to be set to a user who has access to the configured bucket.
For Couchbase Server < 5.0, however, you only need to configure options.password, which corresponds to the bucket password and is optional.
create.adminUsername and create.adminPassword need to be configured only if you wish to create the underlying bucket through archivist:create, or to create views and query indexes through archivist:migrate.
operation | supported | implementation |
---|---|---|
list | ||
get | ✔ | couchbase.get() |
add | ✔ | couchbase.add() |
set | ✔ | couchbase.set() |
touch | ✔ | couchbase.touch() |
del | ✔ | couchbase.remove() |
archivist:create support requires a separate create entry in the config, while views should be created and managed in migration scripts.
MySQL
type: mysql
config:
options:
host: "myhost"
user: "myuser"
password: "mypassword"
database: "mydb"
The available connection options are documented in the node-mysql readme. For pool options please look at Pool options.
operation | supported | implementation |
---|---|---|
list | ✔ | SELECT FROM table WHERE partialIndex |
get | ✔ | SELECT FROM table WHERE fullIndex |
add | ✔ | INSERT INTO table SET ? |
set | ✔ | INSERT INTO table SET ? ON DUPLICATE KEY UPDATE ? |
touch | ||
del | ✔ | DELETE FROM table WHERE fullIndex |
archivist:create support requires that the user is allowed to create databases.
Sample query to create a basic “people” topic table store.
CREATE TABLE people (
personId INT UNSIGNED NOT NULL,
value TEXT NOT NULL,
mediaType VARCHAR(255) NOT NULL,
PRIMARY KEY (personId)
);
Queries against your database are done through a combination of the generated keys and serialized values. A generated key must yield a table name and a primary key. A serialized value must yield a number of column names with their respective values.
For instance, given a topic people and an index { personId: 1 }, the destination table will need to have a personId field, but also a value field to store the data and a mediaType field so that MAGE may know how to process the stored value.
Overriding the serializer
exports.people.vaults.mysql.serialize = function (value) {
return {
value: value.setEncoding(['utf8', 'buffer']).data,
mediaType: value.mediaType,
lastChanged: parseInt(Date.now() / 1000)
};
};
If you want to change how this information is stored, by adding columns and so on, you can overload the serializer method to do so. For example, the serializer above adds a timestamp to a lastChanged INT UNSIGNED NOT NULL column.
Redis
type: redis
config:
port: 6379
host: "127.0.0.1"
options: {}
prefix: "key/prefix/"
The options object is described in the node-redis readme. Both options and prefix are optional. The return_buffers option is turned on by default by the Archivist, because the default serialization prepends values with metadata (in order to preserve mediaType awareness).
operation | supported | implementation |
---|---|---|
list | ||
get | ✔ | redis.get() |
add | ✔ | redis.set('NX') |
set | ✔ | redis.set() |
touch | ✔ | redis.expire() |
del | ✔ | redis.del() |
Topics
lib/archivist/index.js
exports.player = {
index: ['userId'],
vaults: {
userVault: {}
}
};
Topics are essentially Archivist datatypes; they define which vault(s) to use for storage, the key structure for accessing data, and so on.
In this example, we simply specify a new topic, called player, which is indexed by userId.
Add an expiration time
In your topic config, you can specify a ttl to make your data expire after a certain amount of time. ttl should be a string matching one of the following formats:
- days: “[num]d”
- hours: “[num]h”
- minutes: “[num]m”
- seconds: “[num]s”
Add an expiration time
exports.player = {
// ...
ttl: '1m' // Expire the data after 1 minute
};
Store & retrieve topics
lib/modules/players/index.js
exports.create = function (state, userId, playerData) {
state.archivist.set('player', { userId: userId }, playerData);
};
exports.list = function (state, callback) {
var topic = 'player';
var partialIndex = {};
state.archivist.list(topic, partialIndex, function (error, indexes) {
if (error) {
return callback(error);
}
var queries = indexes.map(function (index) {
return { topic: topic, index: index };
});
state.archivist.mget(queries, callback);
});
};
lib/modules/players/usercommands/register.js
var mage = require('mage');
exports.acl = ['*'];
exports.execute = function (state, username, password, callback) {
mage.players.register(state, username, password, function (error, userId) {
if (error) {
return state.error(error.code, error, callback);
}
mage.players.create(state, userId, {
coins: 10,
level: 1,
tutorialCompleted: false
});
state.respond(userId);
return callback();
});
};
lib/modules/players/usercommands/list.js
var mage = require('mage')
exports.acl = ['*'];
exports.execute = function (state, callback) {
mage.players.list(state, function (error, players) {
// We ignore the error for brevity's sake
state.respond(players);
callback();
});
};
Again, in this example we are leaving the ACL permissions entirely open so that you may try to manually access them; in the real world, however, you would need to make sure to put the right permissions in here.
In this example, we augment the players module we previously created with two methods: create and list. In each method, we use state.archivist to retrieve and store data. We then modify the players.register user command, and have it create the player’s data upon successful registration. Finally, we add a new user command called players.list, which will let us see a list of all players’ data.
You may notice that players.list actually calls two functions: state.archivist.list and state.archivist.mget; this is because list will return a list of indexes, which we then feed into mget (remember, Archivist works with key-value).
You may also notice that while state.archivist.list is asynchronous (it requires a callback function), state.archivist.set is not; because states act as transactions, writes are not executed against your backend storage until the transaction is completed, thus making write operations synchronous. This will generally be true of all state.archivist APIs; reads will be asynchronous, but writes will be synchronous.
Testing storage
curl -X POST http://127.0.0.1:8080/game/players.list \
--data-binary @- << EOF
[]
{}
EOF
Invoke-RestMethod -Method Post -Uri "http://127.0.0.1:8080/game/players.list" -Body '[]
{}' | ConvertTo-Json
We can re-use the previous command to create a new user; once we have done so, we can use the command above to retrieve the data we have just created.
Key-based filtering
lib/archivist/index.js
exports.item = {
index: ['userId', 'itemId'],
vaults: {
itemVault: {}
}
};
lib/modules/items/index.js
exports.getItemsForUser = function (state, userId, callback) {
var topic = 'item';
var partialIndex = { userId: userId };
state.archivist.scan(topic, partialIndex, callback);
};
With certain APIs, you can provide only a portion of an index and search for all indexes that have the same value for that portion. These indexes are referred to as partial indexes.
There are a few ways by which you can split and filter the data stored in your topics:
- You can use archivist.list to list all the indexes matching a given partial index
- You can use archivist.mget to fetch multiple indexes at once
- You can use archivist.scan, which combines both operations mentioned above into one
You will generally want to use scan for most of your operations, but in some cases, you will find that manually fetching the list of indexes using list and applying your own custom filtering before calling mget will give you a better result.
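For instance, a sketch of the list-then-mget approach with custom filtering might look like this (the filter shown is arbitrary and purely for illustration):
lib/modules/items/index.js
exports.getSomeItemsForUser = function (state, userId, callback) {
  var topic = 'item';
  var partialIndex = { userId: userId };

  // List all indexes matching the partial index
  state.archivist.list(topic, partialIndex, function (error, indexes) {
    if (error) {
      return callback(error);
    }

    // Apply custom filtering on the returned indexes
    // (here, a purely hypothetical filter on the itemId)
    var filtered = indexes.filter(function (index) {
      return String(index.itemId).indexOf('sword') === 0;
    });

    // Fetch only the filtered entries in one go
    var queries = filtered.map(function (index) {
      return { topic: topic, index: index };
    });

    state.archivist.mget(queries, callback);
  });
};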
Limiting access
lib/archivist/index.js
exports.item = {
index: ['userId', 'itemId'],
vaults: {
client: {
shard: function (value) {
return value.index.userId;
},
acl: function (test) {
test(['user', 'test'], 'get', { shard: true });
test(['cms', 'admin'], '*');
}
},
inventoryVault: {}
}
};
In most cases, you will want to make sure that a given user will only be able to access data they have the permission to access.
There are primarily two ways to limit access to topics:
- shard function: used to filter what data can be viewed;
- acl function: used to determine if the data can be accessed;
In this example, we use the shard function to limit returned data to only data which matches the userId.
We then use the acl function to only allow users and tests access to the get API, while giving full access to CMS users and administrators.
Database Creation
Some vaults also support database/vault creation via the archivist:create command, as long as admin credentials are properly configured; see each vault’s configuration on how to use it.
The archivist:create enabled backends are:
- file
- couchbase
- mysql
It is not recommended to use those features in production.
Migrations
MAGE supports database migration scripts, similar to Ruby on Rails 2.1, which are aligned with your package.json version and exposed via the archivist:migrate command.
Running the archivist:migrate command will go through each migration in order, up to the current package.json version, and apply them. The command also allows specifying an exact version, allowing for testing new migrations or reverting to a previous version.
How to write migration scripts
Migration scripts are single files per vault and per version. These files are JavaScript modules and should export two methods: up and down, to allow migration in both directions. You are strongly encouraged to implement a down migration path, but if it’s really impossible, you may leave out the down method. Keep in mind that this will block rollback operations. See the sample below for more details.
exports.up = function (vault, cb) {
var sql =
'CREATE TABLE inventory (\n' +
' actorId VARCHAR(255) NOT NULL PRIMARY KEY,\n' +
' value TEXT NOT NULL,\n' +
' mediaType VARCHAR(255) NOT NULL\n' +
') ENGINE=InnoDB';
vault.pool.query(sql, null, function (error) {
if (error) {
return cb(error);
}
return cb(null, { summary: 'Created the inventory table' });
});
};
exports.down = function (vault, cb) {
vault.pool.query('DROP TABLE inventory', null, cb);
};
The migration file goes into your game’s lib/archivist/migrations folder, in a subfolder per vault. This folder should have the exact same name as your vault. The migration file you provide should be named after the version in package.json and have the extension .{js,ts}; other file extensions are ignored.
Some typical examples:
lib/archivist/migrations/<vaultname>/v0.1.0.js
lib/archivist/migrations/<vaultname>/v0.1.1.js
lib/archivist/migrations/<vaultname>/v0.2.0.js
The callback of the up method allows you to pass a report that will be stored with the migration itself inside the version history. In MySQL for example, this is all stored in a schema_migrations table, which is automatically created.
How to execute migrations
Migrations can be executed by calling some specific CLI commands, which are detailed when you run npm run help or mage --help. They allow you to create a database, drop a database, and run migrations.
This is what they look like:
archivist:create [vaults] create database environments for all configured vaults
archivist:drop [vaults] destroy database environments for all configured vaults
archivist:migrate [version] migrates all vaults to the current version, or to the version requested
Maximum data size (warning and errors)
archivist:
size:
# Default set to 1Mb (1024 kilobytes); set to false to disable
warning: 256
# Disabled by default
error: 512
MAGE will keep track of all data that is written into vaults, and automatically log a warning when the data size reaches a certain threshold. Optionally, you can also configure MAGE to throw an error should the data reach a certain size.
You may configure this behavior through archivist.size.warning and archivist.size.error in your configuration files. By default, MAGE will never throw an error no matter how big your data gets, but will log a warning should your data be bigger than 1 megabyte.
Events
We have seen in the States section of this user guide that states can be used to emit events between players. Before explaining how to send events, we will first see how MAGE sends events to the clients, and how to configure the server to do so.
How does it work?
Event sending and receipt is done via the Message Stream.
The Message Stream is a protocol used by MAGE servers and their clients to communicate. This protocol is implemented through the following transports:
- short-polling: The client sends a request to the server and gets an instant response containing any pending events. The client repeats this every X seconds to receive new events.
- long-polling: The client sends a request to the server and keeps the connection open. The server sends a response once one or more events destined for the client are received. The client then opens a new connection and repeats the process.
- websocket: Real-time event sending and receipt.
If you need more information about the Message Stream protocol, you can read this documentation.
In your MAGE config file, you can specify the priority of the transports which will be used by the client SDK to receive the events.
config/default.yaml
server:
msgStream:
detect:
- websocket
- longpolling
- shortpolling
You can also configure longpolling transport:
config/default.yaml
server:
msgStream:
transports:
longpolling:
heartbeat: 60
For the longpolling transport, you can specify a heartbeat value, which corresponds to the number of seconds until a request expires and is automatically closed.
Now that we have seen how to configure your server to send and receive events, let’s see how to send events with the State object.
Sending events
lib/modules/players/usercommands/annoy.js
exports.acl = ['*'];
exports.execute = function (state, actorId, payload, callback) {
state.emit(actorId, 'annoy', payload);
callback();
};
When a user command is executed, you can stack many events to be emitted once the user command succeeds. Those events will then be sent synchronously to the destination.
Sending asynchronous events
lib/modules/players/usercommands/bombard.js
var State = require('mage').core.State;
exports.acl = ['*'];
exports.execute = function (state, actorId, payload, callback) {
var asynchronousState = new State();
var count = 0
function schedule() {
setTimeout(function () {
asynchronousState.emit(actorId, 'annoy', payload);
count += 1;
if (count === 100) {
return asynchronousState.close()
}
schedule();
}, 1000);
}
schedule();
callback();
};
In some cases, you might want to emit events that are not attached to a user command. For instance, you may want to send an event after a certain amount of time, or once something has changed in the database. To do so, you will need to create your own State for that. You will also need to make sure to manually close that state.
For more information, please read the State API documentation.
Broadcasting events
lib/modules/players/usercommands/annoyEveryone.js
exports.acl = ['*'];
exports.execute = function (state, payload, callback) {
state.broadcast('annoy', payload);
callback();
};
In some rare cases, you might want to emit events to all users. For instance, you might want to warn connected users of an upcoming maintenance, or of other events which might affect them.
In such cases, you will want to use state.broadcast to send the event to everyone. Keep in mind that broadcasting to all may affect your overall load if you have many players connected simultaneously.
Time manipulation
lib/modules/myModule/index.js
const {
time,
logger
} = require('mage')
// Accelerate time by a factor of 5
time.bend(0, 5)
exports.method = function (state) {
const now = time.sec();
// lastLogin would typically have been loaded from your data store beforehand
if (now > lastLogin + (60 * 60 * 24)) {
// do daily login bonus
}
state.archivist.set('someTopic', { userId: state.actorId }, {
time: now
});
}
Some types of games are meant to be played over fixed periods of time; some game features may also be time-sensitive. For instance, you might want to give daily bonuses when players log in.
However, during testing, you probably do not want to wait for a whole day to see if your player will be awarded a bonus. To deal with this issue, you can use the MAGE built-in mage.time module. This module allows developers to slow down or accelerate time from the MAGE process’ perspective. This feature is often referred to as time bending.
Note that this does not affect existing APIs (such as setTimeout, setInterval, or the Date class); instead, you will need to use the time module to compute a time value from the server’s perspective, then use that value as a setTimeout/setInterval/new Date call argument.
For more information on how to use the time library, see the time module API documentation. You may also want to have a look at the following libraries whenever dealing with time and time bending:
Logging
Logging is an important part of running production-ready game servers. MAGE ships with its own logging API, which developers can use and configure in different ways depending on the environment they are running the server on.
Log levels (channels)
var mage = require('mage');
var logger = mage.logger;
logger.debug('hello world');
logger.info.data({
debug: 'data'
}).log('trying to do something');
logger.error('It broke', new Error('this error stack will be parsed and formatted'));
Log channels define the level of priority and importance of a log entry. Just like in most systems, the level of verbosity of a MAGE server can be configured; during development, you will probably want to show debug logs, while in production seeing warnings and errors will be sufficient.
The following channels are provided in MAGE:
Channel | Description |
---|---|
verbose | Low-level debug information (I/O details, etc); reserved to MAGE internals |
debug | Game server debugging information |
info | User command request information |
notice | Services state change notification (example: third-party services and databases) |
warning | An unusual situation occurred, which requires analysis |
error | A user request caused an error. The user should still be able to continue using the services |
critical | A user is now stuck in a broken state which the system cannot naturally repair |
alert | Internal services (data store API calls failed, etc) or external services are failing |
emergency | The app cannot boot or stopped unexpectedly |
Log contexts
lib/modules/players/index.js
var mage = require('mage');
exports.logger = mage.logger.context('players');
lib/modules/players/usercommands/log.js
var mage = require('mage');
var logger = mage.players.logger.context('log');
exports.acl = ['*']
exports.execute = function (state, callback) {
logger.debug('This log is very contextualized');
callback();
};
Log output
w-12345 - 23:59:59.999 <debug> [gameName players log] This log is very contextualized
In addition to the channel, you may want to set a logger context to help you sort out log output. Developers are free to use contexts as they see fit.
In this example, we first attach the players context to the logger that will be used at the module level, and then expose it; then, in a user command, we add an additional context specific to the user command, and use the resulting logger to simply log a message.
In the terminal, you would then see the log output shown above; notice that the context is appended to the output.
Log backends
The following logging backends are provided:
type | Description |
---|---|
terminal | Log to the console |
file | Log to local files |
syslog | Log to syslog through UDP |
graylog | Log to GELF |
Terminal
Disable terminal logging
logging:
server:
terminal: false
Terminal config example
logging:
server:
terminal:
channels: [">=info"]
config:
jsonIndent: 2
jsonOutput: false
theme: default
The terminal log backend can be configured for pretty-logging, which makes reading log entries in your console more visually comfortable.
The following themes are available:
- default
- dark
- light
The terminal log backend may also be configured for piping logs to an external process. For instance, you may be deploying your MAGE in PaaS or IaaS which simply forwards and parses stdout/stderr output.
In such a case, you can turn the jsonOutput configuration entry to true; each log line will then be output as a JSON object.
File
File config example
logging:
server:
file:
channels: [">=debug"]
config:
path: "./logs"
jsonIndent: 2
mode: "0600" # make sure this is a string!
fileNames:
"app.log": "all" # this is configured by default and you may override it
"error.log": ">=warning"
The file log backend allows you to output logs to a set of files of your choice. Simply specify the log directory where you want your log files to go, and a set of filenames to log to. You can control which log levels will go into which file by setting a log level range as the value, or specify all if you wish to write all logs to a single file.
Syslog
Syslog config example
logging:
server:
syslog:
channels: [">=debug"]
config:
host: localhost # host to connect to (IP or hostname)
port: 514 # UDP port to connect to
appName: myGame
facility: 1 # see syslog documentation
format:
multiLine: true # allow newline characters
indent: 2 # indentation when serializing data in multiLine mode
The syslog log backend allows you to forward your logs to a remote syslog server using the UDP protocol.
MAGE does not currently support forwarding to syslog TCP servers; because of this, it also does not support TLS.
Since only UDP is supported, some logs may not arrive at their destination.
Graylog
Graylog config example
logging:
server:
graylog:
channels: [">=info"]
config:
servers:
- { host: "192.168.100.85", port: 12201 }
facility: Application identifier
format:
multiLine: true # allow newline characters
embedDetails: false # embed log details into the message
embedData: false # embed data into the message
The graylog log backend allows you to forward your logs to a remote graylog2 server, or to any other service capable of consuming the GELF protocol.
You may choose to embed details and data into the message instead of having them as separate attributes. If so, set embedDetails and embedData to true, respectively.
HTTP Server
You might end up in a case where you would like to do one of the following:
- Serve files from your MAGE server (useful when developing HTML5 games)
- Serve service status files (or content)
- Proxy requests to a remote server through MAGE
To this end, MAGE provides an HTTP server API that lets you do exactly that.
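As a rough, hypothetical sketch of what this can look like, the snippet below serves a folder of static files and a plain-text status route. The helper names (serveFolder, addRoute) and the handler signature are assumptions; check the httpServer API reference for the exact calls available in your MAGE version.
HTTP server usage sketch
var mage = require('mage');
var httpServer = mage.core.httpServer;

// Serve the contents of ./public under the /app route
// (helper name assumed; see the API reference).
httpServer.serveFolder('/app', './public');

// Expose a plain-text status endpoint
// (route type and handler signature assumed).
httpServer.addRoute('/status', function (req, res) {
    res.writeHead(200, { 'content-type': 'text/plain' });
    res.end('ok');
}, 'simple');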
Cross-Origin Resource Sharing (CORS)
If you want your application to span multiple domains, you need to enable CORS. For background on what CORS is, the MDN documentation on Cross-Origin Resource Sharing is a good starting point.
Performance
Keep in mind that using CORS will often cause so-called “preflight” requests. A preflight request is an HTTP OPTIONS request sent to the server to confirm that a request may be made in a manner considered safe for the user. Only after this confirmation will the real GET or POST request be made. All of this happens invisibly to the end user and the developer, but you pay a performance penalty in the form of an extra round trip to the server.
Using authentication and CORS
If you use Basic or any other HTTP authentication mechanism, you cannot configure CORS to allow any origin using the * wildcard symbol. In that case, you must specify exactly which origin is allowed to access your server.
Configuration
In your configuration, you can enable CORS like this:
server:
clientHost:
cors:
methods: "GET, POST, OPTIONS"
origin: "http://mage-app.wizcorp.jp"
credentials: true
- methods (optional) lists which HTTP request methods are acceptable.
- origin (optional) sets the required origin; it may be (and defaults to) "*".
- credentials (optional) must be set to true if you want cookies and HTTP authentication to be usable at all. You can then no longer use the wildcard origin.
Log Configuration
server:
quietRoutes: # Filter out debug and verbose logs for URLs matching these regex
- ^\/check\.txt
- ^\/favicon\.ico
longRoutes: # Filter out long warnings for URLs matching these regex
- ^\/msgstream
longThreshold: 500 # The number of milliseconds before a request is considered to be taking too long
The following logs will be filtered out when the URL of the request matches any regex in quietRoutes:
m-28019 - 19:58:44.830 <debug> [MAGE http] Received HTTP GET request: /check.txt
m-28019 - 19:58:44.830 <verbose> [MAGE http] Following HTTP route /check.txt
The following log will be shown for any HTTP request that takes longer than the configured longThreshold, and can be filtered out when the URL of the request matches any regex in longRoutes:
m-876 - 20:08:58.193 <warning> [MAGE http] /app/pc/landing completed in 1181 msec
Clustering
At some point during development, you will want to start looking into deploying multiple instances of your game servers; once you do, you will need to change your configuration to tell MAGE instances:
- How they can find each other (through service discovery);
- How they can connect to each other (through MAGE’s Message Relay Protocol, or MMRP).
This section will cover how you can configure these two services for different development and production use-cases.
Cluster identification
server:
serviceName: applicationName-environmentID
Currently, the message server system identifies itself as being part of a cluster by using the application's root package name and version. However, in environments where multiple instances of the same application and version are run (e.g. a single box that houses multiple test environments), this causes conflicts in the messaging system.
To prevent pollution and contamination of messages, the server.serviceName configuration entry needs to be set to give each environment a unique identifier.
Service discovery
server:
serviceDiscovery: false
Service discovery takes care of letting each MAGE server know where it can find the other servers.
Engine | Description |
---|---|
single | In-memory discovery service (useful during development) |
mdns | Bonjour/MDNS-based service discovery |
zookeeper | ZooKeeper-based service discovery |
consul | Consul-based service discovery |
By default, service discovery is disabled. To enable it, you will need to specify what engine you wish to use, and what configuration you wish to use for that engine.
Single
server:
serviceDiscovery:
engine: single
The single engine can be used during development to locally allow the use of the service discovery API. It will only find the local server.
Bonjour/MDNS
server:
serviceDiscovery:
engine: mdns
options:
# Provide a unique identifier. You will need to configure
# this when you have multiple instances of your game cluster
# running on the same network, so as to prevent MAGE servers
# from one cluster from connecting to another cluster.
description: "UniqueIdOnTheNetwork"
The MDNS engine uses MDNS broadcasts to let all MAGE servers on a given network know when new servers appear or disappear.
This engine is very convenient, since it allows for service discovery without having to configure any additional services. However, note that certain networks (such as the ones provided by AWS) will not allow broadcasts, so you will not be able to use this engine in such cases.
ZooKeeper
server:
serviceDiscovery:
engine: zookeeper
options:
# The interface to announce the IP for. By default, all
# IPs are announced
interface: eth0
# List of available ZooKeeper nodes (comma-separated)
hosts: "192.168.1.12:2181,192.168.3.18:2181"
# Additional options to pass to the client library.
# See https://github.com/alexguan/node-zookeeper-client#client-createclientconnectionstring-options
# for more details
options:
sessionTimeout: 30000
The zookeeper engine will use ZooKeeper to announce MAGE servers.
Consul
server:
serviceDiscovery:
engine: consul
options:
# Interface to announce
interface: enp4s0
# optional
consul:
host: consul.service.dc.consul
The consul engine will use Consul to announce MAGE servers.
MAGE Message Relay Protocol (MMRP)
server:
mmrp:
bind:
host: "*" # asterisk for 0.0.0.0, or a valid IP address
port: "*" # asterisk for any port, or a valid integer
network:
- "192.168.2" # formatted according to https://www.npmjs.com/package/netmask
MMRP (MAGE Message Relay Protocol) is the messaging layer between node instances. It is used to enable communication between multiple MAGE instances and between the different node processes run by MAGE. In the end, it allows messages to flow from one user to another.
MMRP depends on service discovery to announce relays on the network to each process. The protocol and library used to communicate between processes is ZeroMQ.
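From your game code, you normally do not talk to MMRP directly: you emit events on the state object, and MAGE routes them to whichever instance currently serves the target player. The sketch below assumes the State API's emit method and an actorId property on the state; the event name and parameter are purely illustrative, so check the State API reference for the exact signatures.
lib/modules/players/usercommands/poke.js (sketch)
var mage = require('mage');

exports.acl = ['*'];

exports.execute = function (state, targetActorId, callback) {
    // The event is stacked on the state and only delivered once the
    // user command commits; MMRP routes it to the MAGE instance the
    // target player is connected to.
    state.emit(targetActorId, 'players.poked', { from: state.actorId });
    callback();
};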
Metrics
MAGE exposes different metrics out of the box.
Configuration
config/default.yaml
sampler:
sampleMage: false
intervals:
metrics: 1000 # Sampling time window, in milliseconds
There are essentially two configuration elements which you can set:
- sampleMage: activate or deactivate MAGE’s internal metrics
- intervals: define one or more named sampling windows, each exposed as its own metrics endpoint
Depending on your needs, you may wish to configure multiple endpoints (as sketched below), but in many cases one will suffice.
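For example, the sketch below configures two sampling windows; each key under intervals is assumed to be exposed under its own /savvy/sampler/<name> endpoint (as the metrics key is in the next section), and the perMinute name is purely illustrative.
Multiple intervals sketch
sampler:
  sampleMage: true
  intervals:
    metrics: 1000     # 1-second window, queried at /savvy/sampler/metrics
    perMinute: 60000  # hypothetical 1-minute window, queried at /savvy/sampler/perMinute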
Accessing metrics
Query sampler
curl http://localhost:8080/savvy/sampler/metrics
Invoke-RestMethod -Method Get -Uri "http://localhost:8080/savvy/sampler/metrics" | ConvertTo-Json
Response
{
"id": 0,
"name": "metrics",
"interval": 1,
"data": {}
}
Unless you turn sampleMage to true (and you really should!), the amount of data returned will be minimal.
However, when turning sampleMage on, you will be able to see things such as the number of state errors, mean latency per user command, and so on.
Adding custom metrics
lib/modules/players/usercommands/countClicks.js
var mage = require('mage');
var sampler = mage.core.sampler;
exports.acl = ['*']
exports.execute = function (state, callback) {
sampler.inc(['players', 'clicks'], 'count', 1);
callback();
};
Trigger clicks
curl -X POST http://127.0.0.1:8080/game/players.countClicks \
--data-binary @- << EOF
[]
{}
EOF
Invoke-RestMethod -Method Post -Uri "http://127.0.0.1:8080/game/players.countClicks" -Body '[]
{}' | ConvertTo-Json
New sampler output
[...]
"data": {
"players": {
"clicks": {
"count": {
"type": "inc",
"values": {
"1": {
"val": 4,
"timeStamp": 1491400000000
[...]
Here we can see a full example of how to create our own custom metrics and then access them.
Sampler values are defined on the fly; therefore, you must be careful when choosing sampler keys for your metrics, so as to avoid overlaps.
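One simple convention (not something the sampler enforces) is to prefix keys with the module, and optionally the user command, they belong to; this keeps metrics from different modules in separate branches of the sampler output. The shop key below is purely illustrative.
Key namespacing example
var sampler = require('mage').core.sampler;

// Module (and user command) prefixes keep keys from colliding.
sampler.inc(['players', 'countClicks', 'clicks'], 'count', 1);
sampler.inc(['shop', 'purchase', 'attempts'], 'count', 1);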
See the sampler API documentation for more details.
Production
In a production environment, you should set NODE_ENV to production and provide MAGE with a configuration file called config/production.yaml or config/production.json.
developmentMode
The developmentMode entry MUST be turned off. You may also choose to run the game with an environment variable which explicitly turns it off, just in case the configuration went wrong, by running it like this:
DEVELOPMENT_MODE=false npm start
&{ $env:DEVELOPMENT_MODE="false"; npm start }
server
The “server” entry in the configuration must be set up properly. That probably means the following (a combined configuration sketch follows this list):
- workers should be set to a number to indicate a worker count or, even better, to true (meaning that as many workers will be spawned as the CPU has cores).
- serviceDiscovery.engine should be appropriately selected for this environment.
- mmrp must be set up to allow all MAGE servers to communicate with each other on the network.
- sampler collects performance metrics. Are you going to use them, or should you turn it off?
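Tying this checklist together, a production configuration could look roughly like the sketch below. The keys come from the sections above, but the engine choice, host names, and channel ranges are illustrative assumptions; adapt them to your own infrastructure and double-check key placement against the configuration reference.
config/production.yaml (sketch)
developmentMode: false

server:
  workers: true # one worker per CPU core; see the checklist above
  serviceDiscovery:
    engine: zookeeper
    options:
      hosts: "zk1.internal:2181,zk2.internal:2181" # hypothetical ZooKeeper nodes
  mmrp:
    bind:
      host: "*"
      port: "*"

logging:
  server:
    terminal:
      channels: [">=notice"] # notice and above; see the log channels table

sampler:
  sampleMage: true
  intervals:
    metrics: 1000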