Author: Jon Woodall – Managing Director, Two Patch Pirates
This meetup was a special one to celebrate one year of BUUG meetups. As such, there were a few differences to usual:
- The meetup was held on a Sunday afternoon rather than the usual Thursday evening.
- The meetup was about double the usual length, with five speakers (originally planned to be six) rather than the usual three.
- The normal beer and pizza were replaced by coffee and birthday cake.
There were about 30 attendees at the meetup – maybe half a dozen of them being Unity employees.
I have attempted to summarise below the content of the five presentations that were made. Apologies to the presenters for any errors – feel free to email me corrections or links to further information.
Visual Effect Graph Demo
This was presented by John O’Reilly from Unity (@John_O_Really).
Visual Effect Graph is new functionality available in Unity from version 2018.3 onwards. At present it's in preview – so it should not yet be used in a production environment. To try it out you'll need at least that version of Unity and will need to enable the display of preview packages in the Unity Package Manager.
VEG is intended to supplement the existing Particle System within Unity (there are situations where you’ll still want to use the old Particle System rather than VEG).
The current Particle System is realistically limited to thousands of particles at a time before you get system slowdown. VEG supports millions of particles at once. The massive performance increase is gained by moving the workload from the CPU to the GPU. As well as allowing orders of magnitude more particles, VEG allows more complex behaviours for them. The main downside is that the particles within VEG exist only inside the GPU – so they can't practically be individually accessed and are not interfaced with the main Physics Engine used by the rest of your application.
After explaining what the VEG was, John gave a demonstration of developing a small weather system (focussed on it snowing). It looked surprisingly simple to develop an implementation that appeared to a user to contain a fair degree of complexity. VEG isn't going to be up to production standard in time for our current couple of projects but is something I'll investigate for my next one.
The flexibility and scale of VEG make it very suitable for effects like water and fire (a good demo of which was shown) that aren’t practical using the old Particle System.
Design, Build and Operate faster with the PiXYZ Plug-in for AEC
This was presented by Kieran Colenutt from Unity.
PiXYZ (pronounced Pixies) is a Unity plug-in for AEC (Architecture, Engineering & Construction) developed to move 3D data between applications. In addition to moving 3D modelling data, PiXYZ can also move BIM (Building Information Modelling) data – such as materials specifications – and pretty much any other metadata.
One of the main areas of functionality of PiXYZ is that you can apply rules to imported models based on their metadata. A demonstration was given of importing a 3D model of the offices in which we were having the meetup. Rules were applied adding materials/textures to all components based upon a text label in their metadata: generating a fully rendered and lit scene upon import. It is possible to set up automatic reimporting of data when it changes (though you must manually export from the source software) and for rules to be automatically applied when changed data is reimported.
PiXYZ presents a workflow for generating real-time, high-quality visualisation of AEC data from multiple source packages (Autodesk Revit was specifically identified but apparently the full list is long). The presentation obviously barely scratched the surface of the available functionality and, whilst not directly relevant to our area of work, it was interesting to see (I do wonder if there's a game-dev use in building dungeon levels fast).
Top Tips for Marketing your Game on Social Media
This was presented by Chris Nairn from KOBA.
In 2006 Chris was newly graduated from BIMM (the Brighton Institute of Modern Music), working part-time in Starbucks and looking to form and promote a new band. Whilst looking around at the competition he noticed that even some really crap (in his opinion) bands were doing well – and it appeared to be, at least in part, down to them promoting themselves on Myspace (which was popular back then). Chris realised that this social media thing had potential and resolved to learn all about it – ending up where he is now, as a social media consultant (amongst the other hats he wears).
Chris asserts that as game developers we need to start from the position of assuming that no one cares about us or our products. We need to move the focus to what our audience wants rather than what we want to sell. Although not explicitly stated by Chris, there's an element of audience selection where, to an extent, you can target an audience that wants something you are willing to sell. Chris indicated that it is important to learn about your target/market audience – which needs to be something more precisely defined than just 'gamers'.
The old way of launching a product was to keep it quiet until it was very nearly finished, then make a lot of noise immediately before, during and after launch and hope it sells well. Chris asserts that this is by no means an optimal method any more – nowadays you need to get engagement during the actual development process. Chris believes that, as developers, we should be documenting what we do and building a community from early in the process. The modern market doesn't just want to buy an end product – it wants to be involved in the whole development journey.
Chris was very clear that we should avoid falling into the trap of basing our social media activities on vanity metrics (things like retweets and likes) as they aren’t a useful measure of actual engagement. Social media needs to be about building relationships with potential (and current) customers.
A lot of what Chris said resonates with what our own in-house Social Media guy has been saying to us.
After Chris' presentation we had an open mic where a few attendees demonstrated projects they were working on. We then devoured the birthday cake and had a chance to chat with one another, before the second set of talks.
How Not to be Lost in Localisation
This was presented by David Garcia Abril from Shinyuden.
David works for (or with) Shinyuden – a studio that specialises in localisation as well as developing its own games (one of which, Heroes Trials, was demoed in the open mic before David's presentation). David works as a localisation specialist, game designer and translator from English to Spanish.
David began by explaining that localisation is more than just translation. A translation will only provide a broad raw meaning equivalent to the original. The goal of localisation is to convey meaning as locals (those whose native tongue is the destination one) would. Conveying meaning can involve more than just spoken (or written) language – things like gestures can have very different meanings in different cultures.
The goal of localisation, broadly, is that players of the game should have an experience as close as possible to the experience that those playing in the original language have. Players shouldn’t be aware (from gameplay) that the game wasn’t originally in their language. The reason to localise is to reach an audience that would be unable (or unwilling) to play the game if it were not localised. Some games need localisation more than others – a complex RPG with riddles in would need it far more urgently than a simple shmup.
David then gave a very good run-through of the process of localisation. It is beyond the scope of this piece for me to document the entire process – aside from anything else, I'd just be parroting the contents of his slides. But there were a few key points that stood out to me:
- Begin localisation early in a project. You need to plan so as not to run into problems later in the process (e.g. finding out late on that you used a font which doesn't contain all the characters for a language you are using).
- Whenever requesting localisation, it is vital to provide as much contextual information as possible – both about the setting of the material (who is speaking? who are they speaking to? what are they speaking about?) and about any constraints on the localised material (does it have to fit in a certain sized text box?).
- Make sure you are consistent. This is especially true if using multiple translators for the same language – you don't want to end up with the same English term being translated into different (though equally valid) terms. Maintain a glossary of terms (and names). Where multiple translators are used for material in the same language, all output should be checked by one person responsible for maintaining a consistent style.
- Avoid graphic text wherever possible – it requires more steps to change in-game (artwork as well as translation) and is much harder to subsequently update.
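To illustrate the last two points, here is a minimal sketch (entirely my own, not from David's talk) of an externalised string table in C#: text lives in data rather than in graphics or hard-coded literals, lookup keys stay stable across languages, and a context note travels with each entry for the translator. All names are hypothetical – a real project would use a dedicated localisation package or spreadsheet export rather than hand-built dictionaries.

```csharp
using System.Collections.Generic;

// One translatable string plus the context a translator needs to localise it.
public class LocalisedEntry
{
    public string Text;     // the translated string for the current language
    public string Context;  // who is speaking, to whom, any size constraints
}

public class StringTable
{
    readonly Dictionary<string, LocalisedEntry> entries;

    public StringTable(Dictionary<string, LocalisedEntry> entries)
    {
        this.entries = entries;
    }

    // Look up by a stable key; surface missing translations loudly so that
    // localisation QA spots them rather than players.
    public string Get(string key) =>
        entries.TryGetValue(key, out var entry)
            ? entry.Text
            : $"[MISSING: {key}]";
}
```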
David then focussed on the importance of proper QA on localisation output. Testing needs to be done by someone with access to the same source materials as the localisation people. Don't have localisation QA done by someone who can't read the original language.
David addressed the specific issues associated with spoken rather than written word. The main point being that translation needs to be final before engaging voice actors – as it’s not as simple to correct errors later. He also detailed many of the stages involved in dubbing with, again, it being important to provide as much information as possible to voice actors, so they have a clear understanding of the meaning of what is being said.
David concluded by saying that if we only took one thing away from his presentation it should be the word ‘Context’. Context is essential when localising if you want to end up with high-quality results. At all stages of the process ensure that everyone involved has as much contextual information about what they’re working on as possible.
This was probably the presentation most immediately relevant to my work. The game I’m currently working on has a fair amount of text content (written and spoken). Although we can’t commit to localisation at present, I will do some preparatory work to simplify localisation if English sales suggest there’s a large enough market to warrant localised versions.
An Improved and more Flexible Asset Pipeline with Addressables
The final presentation of the day was made by Ciro Continisio from Unity.
Historically there have been two means of managing assets within Unity. The original method (and still widely used) is simply saving them into 'Resources' folders. Everything in those folders is included in the build and indexed when the game starts – leading to long load times – and assets are addressed in code by a text name, leaving code open to major breakage if an asset is renamed. Whenever anything changes the whole project has to be rebuilt, which can be a serious pain in large projects.
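As a reminder of what that looks like in practice, a minimal sketch of the Resources approach (the 'Prefabs/Enemy' path is a hypothetical example):

```csharp
using UnityEngine;

public class ResourcesLoader : MonoBehaviour
{
    void Start()
    {
        // The asset is found by its string path under a Resources folder –
        // rename or move the asset and this silently returns null.
        GameObject prefab = Resources.Load<GameObject>("Prefabs/Enemy");
        if (prefab != null)
        {
            Instantiate(prefab);
        }
    }
}
```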
A few years back Unity added a new method of managing assets – Asset Bundles.
Asset Bundles allow assets to be grouped together however the developer wishes – and loaded into/unloaded from memory when required. This allows reduced loading times, reduced build times and dynamic content (as pretty much everything about asset bundles can be changed programmatically). Unfortunately, not only CAN you do everything in code with asset bundles, you MUST do everything in code – making them unwieldy to use. And Asset Bundles have no means of managing dependencies – that must be done manually.
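A rough sketch of why Asset Bundles feel code-heavy – loading, dependency handling and unloading are all on you (the bundle and asset names here are hypothetical):

```csharp
using System.IO;
using UnityEngine;

public class BundleLoader : MonoBehaviour
{
    void Start()
    {
        // You must know about (and load) any bundles this one depends on
        // yourself – there is no automatic dependency management.
        AssetBundle bundle = AssetBundle.LoadFromFile(
            Path.Combine(Application.streamingAssetsPath, "enemies"));
        if (bundle == null) return;

        GameObject prefab = bundle.LoadAsset<GameObject>("Enemy");
        Instantiate(prefab);

        // false = keep loaded assets alive, release only the bundle's data.
        bundle.Unload(false);
    }
}
```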
Addressable Assets attempts to address (oops) the weaknesses of Asset Bundles. In fact, the Addressable Assets system is built on top of (using) Asset Bundles. The basic principle of it is that when you mark an asset (such as a prefab – which was what was used in this presentation) as addressable the Unity system generates an internal reference to it which is not lost even if the asset is renamed or moved. Further functionality is added by having the ability to group assets together (based on any arbitrary criteria you choose) and by being able to define labels which can be applied to assets then used to access them.
A demonstration was given showing how assets could be moved between being stored locally and being stored remotely. We were also shown how assets could be exchanged on-the-fly without any need to rebuild a project (e.g. your project can load all assets with a certain label – and what it gets will only be resolved at runtime).
It looked easy to swap from using the current system(s) to using Addressable Assets (instead of using GameObjects you use AssetReferences). The change would not be totally without cost – loading behaviour ceases to be entirely deterministic (you can’t be sure of the exact order different assets will load) and you must rely on a call-back to tell you when an asset has loaded. But it looks like Addressable Assets will be the way to go once it comes out of preview – especially for large projects, projects with remote data (e.g. mobile games that load on-demand from servers) and in particular any game wanting to have content updates without the need to reinstall the whole project.
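A minimal sketch of that swap under the preview-era Addressables API (the field name and error message are my own):

```csharp
using UnityEngine;
using UnityEngine.AddressableAssets;
using UnityEngine.ResourceManagement.AsyncOperations;

public class AddressableSpawner : MonoBehaviour
{
    // An AssetReference (assigned in the Inspector) survives renames and
    // moves of the underlying asset, unlike a string path into Resources.
    public AssetReference enemyPrefab;

    void Start()
    {
        // Loading is asynchronous – we only know the asset is ready when the
        // callback fires, which is why load order is no longer deterministic.
        enemyPrefab.InstantiateAsync().Completed += OnSpawned;
    }

    void OnSpawned(AsyncOperationHandle<GameObject> handle)
    {
        if (handle.Status != AsyncOperationStatus.Succeeded)
        {
            Debug.LogError("Failed to load addressable asset");
        }
    }
}
```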
We had five good talks – with a good level of audience participation in all (none met with a deathly silence when "Are there any questions?" was asked at the end).
Next BUUG (#7 I think) should be in a couple of months.
Jon (longjon (at) twopatchpirates.com)