Using Premake For Your Asset Pipeline

For the last few years I've been using Premake as my go-to build script generator [1] for new projects. While Premake has its quirks, thanks to the build scripts being written in Lua it is highly flexible and hackable, and it has proven itself viable on many commercial projects. The build scripts and projects generated by Premake are also easier to read than those generated by CMake, in my opinion.

If you are making your own cross platform game with assets like images, shaders, data files, etc. you will likely find yourself needing to do some transformations to those assets before shoving them into your game. This could be compiling shaders for the target platform, converting model files to another format, transcoding videos, or packing images into texture atlases to name a few examples. This process is referred to as your game's 'asset build pipeline'.

There are many ways to hack together an asset pipeline. Often it's a combination of Python and shell scripts, or, if you have your own editor, it could be integrated into that.

In the case of my latest project I had already decided to depend on Premake for build scripts, so I wanted to see if it could also handle asset building, thus avoiding introducing another dependency.

So far Premake has handled the challenge quite well! If you have a small to medium-sized game project that needs an asset build pipeline then it might be worth a try. In this article I hope to provide a starting point for investigating how this approach might work in your own project, and highlight some stumbling blocks I encountered during my implementation.

Hooking Into Premake

The first question is how do we organise our files so Premake can do something with them.

In my project I have my assets checked into the same repo as my code under assets/source/. Built asset files are placed in assets/built/ (which is added to the .gitignore). Where exactly these folders are located isn't critical, but if file paths can be converted from source to destination by simply swapping the path prefix, you will make your life a lot easier.

Premake allows us to add a new action with newaction, passing it a table with a trigger string, some descriptive help text, and a function to be executed. Premake uses the current working directory of the caller, so as long as you run the script from the project root you can get a table of all your asset files with os.matchfiles('assets/source/**').

newaction {
    trigger = "assets",
    description = "Builds the assets! Run every time you change an asset.",
    execute = function()
        local assetSourcePaths = os.matchfiles("assets/source/**")

        for _, path in ipairs(assetSourcePaths) do
            print(path)
        end

        print('Your asset pipeline code goes here!')
    end
}

With the above new action added to your premake5.lua you can now run premake5 assets to call your own function, in this case printing a list of everything in the assets/source folder.

Note

Confused by the syntax above? Premake provides newaction as a global function. Lua allows you to omit parentheses when calling a function with a single table or single string argument. So the above code is equivalent to newaction( { trigger = "assets", description = ..., etc. } ).

Mirroring Files

The most basic way to 'build' an asset is not to build it at all. In other words, doing a simple copy from the source directory to the built directory. This might seem pointless but it gives us a chance to select which files do or don't go into the build, and sets us up for the more complicated build steps we'll create later.

In my implementation I build a list of files to mirror based on their file extension. Then for each file in the source folder I determine if it should be copied to the destination folder based on three criteria:

  1. If the file is missing from the destination folder, do the copy
  2. If the source file and destination file are different sizes, do the copy
  3. If the source file modified time is newer than the destination file modified time, do the copy

You can get the size and modified time for a file using info = os.stat('path/to/file').
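
Putting those checks together, the mirror step ends up as something like the rough sketch below. It assumes the prefix-swap layout described earlier; os.copyfile, os.mkdir, and path.getdirectory are Premake helpers, and I'm relying on os.stat returning nil for a missing file.

-- Rough sketch: copy a single source file to the built folder if needed.
-- Assumes source and destination paths differ only by their prefix.
local function mirrorFile(sourcePath)
    local destPath = sourcePath:gsub('^assets/source/', 'assets/built/')

    local sourceInfo = os.stat(sourcePath)
    local destInfo = os.stat(destPath)

    local shouldCopy =
        destInfo == nil                          -- 1. missing from the destination
        or sourceInfo.size ~= destInfo.size      -- 2. sizes differ
        or sourceInfo.mtime > destInfo.mtime     -- 3. source is newer

    if shouldCopy then
        os.mkdir(path.getdirectory(destPath))
        os.copyfile(sourcePath, destPath)
    end
end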

Introducing Generator Modules

Generator modules are what we'll use to handle assets that require some kind of transformation. In my implementation a generator module is any Lua file inside assets/source whose file name ends with generator.lua. If we encounter any such files in the source folder we immediately import the module and store the result. Hopefully only trusted people are able to add files to the source assets folder :).

The calling script expects generator modules to return a table with the following format.

-- my-asset-generator.lua

local m = {}

m.rules = function()
    return {
        -- first rule
        {
            inputs = { 'input/file/path', 'another/input/file' },
            outputs = { 'output/file/path' }
        },
        -- second rule
        {
            -- ...
        }
    }
end

m.generate = function(rule)
    return true -- on success
    return false, error_message -- on failure
end

return m

The rules() function returns a table of tables, where each sub-table describes inputs and outputs. The generate() function accepts one of those sub-tables and actually produces the outputs from the inputs.

Notice that rules() is required to know what outputs will be generated before generate() is called. This is necessary for determining when rules should be executed, and for correctly cleaning up orphans which is discussed later. Knowing the outputs ahead of time is usually straightforward, but not always, so it may require doing some work.
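
For reference, here's a rough sketch of how the calling script can discover, load, and run the generators. Up-to-date checks and nicer error reporting are left out.

-- Find every file ending in generator.lua under assets/source and import it.
local generators = {}
for _, filePath in ipairs(os.matchfiles('assets/source/**')) do
    if filePath:match('generator%.lua$') then
        table.insert(generators, dofile(filePath))
    end
end

-- Ask each generator for its rules, then produce each rule's outputs.
for _, generator in ipairs(generators) do
    for _, rule in ipairs(generator.rules()) do
        local ok, err = generator.generate(rule)
        if not ok then
            print('Asset generation failed: ' .. (err or 'unknown error'))
        end
    end
end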

Generating An Asset

To explain how asset generation works, I'll use the example of generating an app icon to be used for distribution. Apps on macOS should bundle a .icns file containing the icon image at a variety of resolutions. On Windows we want to generate a .ico, which gets embedded as a resource in the executable. Both files should use assets/source/icon.png as the source file.

Here's how we describe that with our generator script. Notice that in this case we only have one rule, but we still nest it inside another table for consistency.

function m.rules()

    local inputs = { 'assets/source/icon.png' }
    local outputs = {}

    if _TARGET_OS == 'macosx' then
        table.insert(outputs, 'assets/built/icon.icns')
    elseif _TARGET_OS == 'windows' then
        table.insert(outputs, 'assets/built/icon.ico')
    else
        printError('Icon generation not supported on this platform!', _TARGET_OS)
    end

    return { { inputs = inputs, outputs = outputs } }
end

To actually generate the icon I have the generate() function call out to the appropriate command line tool depending on the platform.

function m.generate(rule)

    local input = _MAIN_SCRIPT_DIR .. '/' .. rule.inputs[1]
    local output = rule.outputs[1]

    if _TARGET_OS == 'macosx' then
        os.executef("tools/scripts/make_mac_icon.sh %s %s", input, output)
    elseif _TARGET_OS == 'windows' then
        os.executef("tools/makeicon.exe %s %s", input, output)
    else
        return false, 'unable to generate icon for the target platform ' .. _TARGET_OS
    end

    return true
end

As you can see, I'm writing the generator using assumptions that only apply to the current project (location of scripts, filenames, etc.). This is fine! You don't always have to write the most generic solution ever. And I would argue this is a case where it pays to do what's necessary and no more.

As much as I like Lua, as scripts get longer, the loose typing and runtime errors become increasingly painful. My preferred approach is to solve only the problem at hand, then to use descriptive error messages to cover the unimplemented paths. If I hit one of those down the line I can take the time to implement it, and if not I saved myself some work.

Adding Build Options

In the above example the code branched based on the host OS (macOS or Windows), but what if you are building a game on one platform and want to target multiple other platforms?

One way to solve this would be to add an option to select which platform to build assets for. For example if you want your asset system to build shaders for different graphics backends you could do something like the following:

newoption {
    trigger = "graphics",
    value = "VALUE",
    description = "Graphics backend to Use",
    default = "default",
    allowed = {
        {"default", "default for the current platform"},
        {"d3d11", "Compile shaders for DirectX 11"},
        {"console", "Use console specific shader compiler"},
    }
}

Users set the option from the command line like so: --graphics=console. You can then query the value using the name of the option in the global _OPTIONS table:

local backend = _OPTIONS['graphics']

if backend == 'd3d11' then
    ...
elseif backend == 'console' then
    ...
end

Generating Code

Generating code can be done much the same as generating assets. In my project I place all generated code under source/generated/ and add that folder to the .gitignore.

You could generate code from a generic data format like JSON, YAML, or CSV, or add custom annotations to your non-generated game code and parse them, e.g. to generate reflection data.
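
As an illustration (the file layout and format here are made up for the example), a generator's generate() could read a small CSV of tuning values and emit a C header into source/generated/:

m.generate = function(rule)
    -- Illustrative only: turn "name, value" lines from a CSV into #defines.
    local lines = { '// Generated file, do not edit by hand.', '#pragma once', '' }

    for line in io.lines(rule.inputs[1]) do
        local name, value = line:match('([%w_]+)%s*,%s*(.+)')
        if name then
            table.insert(lines, string.format('#define TUNING_%s %s', name:upper(), value))
        end
    end

    local file = assert(io.open(rule.outputs[1], 'w'))
    file:write(table.concat(lines, '\n') .. '\n')
    file:close()

    return true
end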

Deleting Orphaned Files

The final step in my implementation is to delete orphaned files from the assets/built/ folder. Some might deem this unnecessary, but I found it handy during development when I'm iterating on assets and may not be diligently cleaning up after myself.

The basic idea is to assemble a list of output files you expect the asset build system to generate based on the current inputs, then delete any files in the built folder that are not on the list. Without this, if I added test.shader to the source folder, compiled the shader by running the asset build pipeline, then changed my mind and removed test.shader, the compiled shader would still be lingering in the built folder. This might be fine during development, but I definitely don't want deleted assets to make their way into a public build.
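
The sweep itself only takes a few lines. In this sketch, expected is a set of every destination path the current run intends to produce, built from the mirror list plus the outputs reported by each generator's rules().

-- Delete anything in assets/built/ that we don't expect to produce this run.
local function deleteOrphans(expected)
    for _, builtPath in ipairs(os.matchfiles('assets/built/**')) do
        if not expected[builtPath] then
            print('Deleting orphaned file: ' .. builtPath)
            os.remove(builtPath)
        end
    end
end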

Putting It All Together

To recap, the basic steps are as follows (a rough sketch of how they fit together comes after the list):

  1. Build a list of files currently in the source folder and in the built folder.
  2. Load all the generator.lua modules and build a list of expected outputs.
  3. Delete orphaned files in the built folder, remembering not to delete any expected outputs.
  4. Copy any mirrored files (i.e. files that don't require building).
  5. Execute the generators to build all remaining files.
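
In my premake5.lua this boils down to a short execute() function, roughly along these lines (the helper functions are placeholders for the pieces described above, not Premake built-ins):

newaction {
    trigger = "assets",
    description = "Builds the assets! Run every time you change an asset.",
    execute = function()
        local sourceFiles = os.matchfiles('assets/source/**')        -- step 1
        local generators = loadGenerators(sourceFiles)               -- step 2
        local expectedOutputs = collectExpectedOutputs(sourceFiles, generators)

        deleteOrphans(expectedOutputs)                               -- step 3
        mirrorFiles(sourceFiles)                                     -- step 4
        runGenerators(generators)                                    -- step 5
    end
}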

You can see a simple example implementation here.

Fin

That's all for today. While this wasn't a comprehensive guide I hope it at least piqued your curiosity. If you have questions or feedback you can hit me up on bsky.

Thanks for reading!


  1. A build script generator is a tool for generating the necessary files to build a project e.g. the Visual Studio solution, Xcode project, Makefile, etc.