Critical Play Project: Case Study (Dialogue UI)

The following case studies of two recent narrative games (Fallout 4 and Disco Elysium) will analyse how their dialogue systems are rendered through their user interfaces (UI). Of particular interest to me are how a player physically interacts with dialogue, how games visually present text, and how they communicate feedback on player choice.

Fallout 4

Dialogue options are mapped to four on-screen ‘buttons’ which ‘float’ over the conversation. On a controller, these buttons correspond to the A/B/X/Y (or triangle/circle/square/cross) buttons; on mouse and keyboard they correspond to the up/down/left/right arrow keys, or can be selected by moving the mouse cursor in the general direction of each button. In contrast to Mass Effect, which places its dialogue options around a radial ‘wheel’ at the bottom of the screen, Fallout 4 presents its options in the centre of the frame; The Witcher 3 uses a similar system. This has the effect of drawing the eye into the scene’s cinematic presentation, though oddly more towards the chest area.

Once a dialogue option has been selected, the dialogue menu disappears and the characters are animated speaking their lines; voice acting is used throughout. Often there is some basic camera movement during this, but facial animation is severely limited. Emotional responses are communicated through generic gestures (likely motion captured). Options within a dialogue scene are paraphrased significantly, often so simplistically that it’s difficult to predict whether the chosen response will be in line with player expectation. Dialogue is captioned by default, and appears at the bottom of the screen. Once new choices become available, the camera refocuses on the player and the four-option menu reappears, with new options beside each of its buttons.

‘Checks’ – referring to choices that require successful skill ‘rolls’ to complete felicitously – are divided into easy, medium and hard, and rendered in different colours (yellow, orange, red). The systems at work here are more opaque than in previous Fallout games, which would often communicate a percentage chance to the player, based on the strength of their relevant skill. In Fallout 4, a player with a ‘low’ Speech skill has a 10% chance of succeeding at a hard Speech check, with this number increasing as their skill grows (but never reaching 100%).
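As a rough illustration, the relationship between skill and check difficulty might be modelled like this – the floor values and the cap below are my own extrapolations from the 10% figure observed in play, not Bethesda’s actual formula:

```python
def speech_check_chance(skill: int, difficulty: str) -> float:
    """Return the probability (0.0-1.0) of passing a persuasion check."""
    floors = {"easy": 0.30, "medium": 0.20, "hard": 0.10}  # assumed floor chances
    cap = 0.95                                             # assumed: never reaches 100%
    floor = floors[difficulty]
    # Scale linearly with skill (0-10), clamping so high skill hits the cap.
    return round(floor + (cap - floor) * min(skill, 10) / 10, 2)
```

Whatever the real curve is, the point stands: the player never sees these numbers, which is exactly the opacity discussed above.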

UI feedback is limited; beyond being notified of quest completion, XP gain or change in reputation, there is a single selection noise. The UI design is in keeping with the diegetic ‘Pip-Boy’ inventory and menu system (the player operates a wrist-mounted LCD computer to perform inventory management and character advancement).

Disco Elysium

Disco Elysium presents its dialogue in a vertically stretched text box, uniformly placed on the right side of the screen. New text is loaded in with a swift swiping animation and a typewriter whirr. While the dialogue is present, the game camera can zoom in or out of the game action (generally it is zoomed in, though often some panning is required to properly compose the shot). This places the readable dialogue at the opposite diagonal to the player character and their companions’ portraits, as well as the player’s health and morale readouts in the bottom left of the screen – both these UI elements act as frames for the in-game action. 

Low- and no-consequence dialogue choices are presented in basic orange; choices that rely on or have been unlocked by facility in a particular skill are marked with [brackets] and coloured in the skill’s colour (blue for intelligence skills, purple for psychological, pink for physical and yellow for speed). Higher consequence dialogue choices – those that involve dice rolls – are displayed in different colours. If a roll is repeatable, it is highlighted in white; if a roll is only attemptable once, it is highlighted in red. Chance-based choices are also accompanied by their percentage success rate (16%, 94%, etc), giving the player insight into both the game’s underlying mechanics and the choice’s likely narrative outcome.
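These presentation rules are consistent enough that they can be sketched as a small function. Everything here is reconstructed from observation – the names and structure are mine, not ZA/UM’s code:

```python
# Skill groups and their tint colours, as described above.
SKILL_COLOURS = {
    "intelligence": "blue", "psychological": "purple",
    "physical": "pink", "speed": "yellow",
}

def render_choice(text, skill=None, chance=None, repeatable=True):
    """Return (display_text, colour) for a single dialogue option."""
    if chance is not None:                          # a dice-roll check
        colour = "white" if repeatable else "red"   # red = one attempt only
        return (f"{text} ({int(chance * 100)}%)", colour)
    if skill is not None:                           # skill-gated, no roll
        return (f"[{skill.title()}] {text}", SKILL_COLOURS[skill])
    return (text, "orange")                         # low/no consequence
```

The value of the scheme is that a glance at colour alone tells the player what kind of decision they are being asked to make.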

Rolls are communicated by a brief animation of dice ‘rolling’, followed by either an orange-red ‘splash’ effect for failure or a blue-green one for success. This splash extends across the screen, briefly obscuring or tinting the action, and unifying both in-game and UI elements in a single colour palette. The failure splash will often result in a loss of health or morale; the UI element in the bottom left ‘flashes’ as this happens.

Portraits of speaking characters or voices are displayed alongside the dialogue box, slightly intruding on the in-game mise-en-scene. They remain there as long as they are speaking, and are replaced when a new character or voice speaks. Some lines (all in the Final Cut release) are voice acted, and character models are animated in response to significant dialogue choices – the player character performing a roundhouse kick, or another character losing faith in the player character.

Takeaways

In Fallout 4, the immediate physical reactions of characters, though generic, provide moments of compelling feedback. The resultant player experience is much less focused on the game’s writing than previous entries in the series – a source of some consternation for many players. I don’t know that this is all that bad.

Fallout 4’s reduction of player dialogue to paraphrasings of their ‘intentions’ is certainly an interesting development – I’m attracted to the idea of a player wanting to achieve an effect or take an action, but not necessarily knowing what they’ll say in order to make it happen. It’s slightly botched here by a lack of consistency (some intentions are questions, some instructions, some subjects), and the fact that dialogue is sometimes of the ‘hub’ school and sometimes of the ‘hatch’ school, as per my other case study. With some thought to maintaining consistency in where choices are placed, and an overhaul of when dialogue choices are demanded, this UI approach could help players reach a more instinctual, reactive mindspace.

Disco Elysium is a compelling case study in how to use high-impact visual and aural design, as well as carefully chosen animation, to make player choices feel impactful. It may suffer from the same informational overload as Planescape: Torment and newer games like Tyranny and Pillars of Eternity, but it uses visual design to signpost players towards plot-significant and ‘characterful’ choices much more effectively. Much work has been put into making the dialogue UI as responsive (or ‘alive-feeling’) as the world (and I’ve not even talked about the inventory or objective menu UIs, which are even more stylish!) – it’s genuinely a pleasure to read.

A tension between the two is the degree to which they expose game systems in dialogue choices. DE’s visual gating of high-consequence choices communicates story progression goals (if I choose this, I will progress the game state significantly) and gives players a visualisation of the game’s structure. The communication of percentage chance also reinforces a sense of character identity and progression. Fallout 4 abandons this more numerical approach in favour of a more instinctual presentation, with varying results. I like how it allows the player to sit in uncertainty, but the lack of preview concerning even the likely consequences makes for an uneven experience.

Critical Play Project: Case Study (Dialogue Systems)

In my research for this project, I identified two key approaches to dialogue systems in games. The first system – the so-called ‘hub’ system – has been widely adopted across game genres. The second – which I’ll call the ‘hatch’ system – has been developed almost as a response to the pervasiveness and rigidness of the first. I have also identified contemporary games that use each system, and which provide interesting examples of these systems being used to place players more firmly in the role of ‘actor’ or ‘performer.’

The dialogue ‘hub’

The ‘hub’ system remains the industry standard for designing interactive game dialogue, and has its roots in the earliest instances of branching dialogue – Choose Your Own Adventure books, which presented dialogue options linking to different pages of a printed book. Conversations for the digital adaptation of this system are generally designed to the following rubric:
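A minimal sketch of that rubric, in illustrative Python (the structure and names are mine): the player cycles through a mostly static list of topics, each of which returns to the hub, until an explicit exit is chosen.

```python
def run_hub(topics, choose):
    """topics: {label: response text}; choose: picks a label from a list."""
    transcript = []
    while True:
        label = choose(list(topics) + ["[Leave]"])
        if label == "[Leave]":
            return transcript             # the only real 'action' is leaving
        transcript.append(topics[label])  # deliver the text, return to hub
```

The loop never changes shape, which is precisely the wealth-of-apparent-choice problem described below.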

This system can be characterised by a wealth of apparent choice – the player can pursue many different avenues of inquiry, defining what they want to talk about, with the game (usually) cooperating – but very little actual choice beyond accepting quests or deepening the player’s knowledge of the world and its characters. Options remain mostly static, and any ‘action’-based options are often signposted as ‘leave conversation’ options – ‘begin combat’ / ‘attack’, for example. High-consequence dialogue options (often ‘gated’ behind ‘skill checks’) are similarly highlighted for the player, encouraging them either to act now and skip extraneous information, or to hold off until they’ve exhausted all other options. Additionally, the outside game world is often paused or ‘suspended’ while choices are selected.

We can see the power of this system in games like Baldur’s Gate and Planescape: Torment, where the philosophical and emotional beats of the story are delivered through excavating reams of text, both spoken and narrated.

Dialogue in Planescape: Torment

By breaking these up into clear points of investigation, the writers can avoid overwhelming players with information, while also sharpening their attention to key themes or clues; these games also make use of omniscient ‘narrator’ voices on top of character text to add even more context. These detail-oriented, text-heavy games are favourites of players who love reading novels, and owe much to the genre of fantasy fiction and tabletop role-playing from which they draw both mechanical and aesthetic inspiration.

The ‘hub’ system evolved significantly with Bioware’s Mass Effect. This game abandoned visualisation of every line of dialogue in favour of a ‘radial’ system that both paraphrased the content of a player’s chosen line and displayed the likely in-game consequences of that choice (divided mainly between the alignment choices of ‘paragon’ and ‘renegade’). This simplification was necessary for a fully voice-acted game, as well as one that was interested in functioning ‘on cinema time’ rather than ‘on novel time.’

Mass Effect’s radial dialogue system

This design represented a significant shift in player role as well. Rather than functioning as the writer of the game’s script, as in earlier Bioware titles, the player now worked more as the director of a film – less concerned with word-by-word script-work than with triggering character beats and altering the compositional flow of conversation (Mass Effect ran an advanced-for-its-time cinematic camera system). This system has remained largely unchanged for fifteen years, and features in such modern titles as Fallout 4: initiating dialogue triggers a series of shot / reverse-shot camera movements, pausing or slowing events in the game world outside.

This system is obviously great at developing thoughts; it may be less so at developing relationships or emotions, and I would argue that none of the ‘hub’ systems mentioned allow for the player to truly feel like the ‘actor’ of the story.

2019’s Disco Elysium takes a significant step towards correcting that. Though it returns to the word-focused ‘hub’ presentation of Planescape: Torment and Baldur’s Gate, it makes the player feel more like an actor by innovating on the formula in two subtle ways.

Disco Elysium

Firstly, the game does away with the idea of an omniscient, or at least halfway-reliable, narrator, replacing it instead with an ensemble of voices, representing the protagonist’s fractured internal monologue. Core mental and physical traits like ‘Logic,’ ‘Reaction Speed,’ and ‘Endurance’ are all given voice, but so are more ineffable qualities: ‘Shivers’ and ‘Inland Empire’ representing physical and psychic intuition respectively. Also characterised is the player character’s life experience: ‘Esprit de Corps’ is the voice of the player character’s level of ‘cop,’ while ‘Electrochemistry’ represents the player character’s facility with, and desire for, drugs.

Disco Elysium’s ensemble of ‘skills’

The more a player relies on or improves these skills, the more that particular voice will be called upon to narrate the action or provide advice. This funnels the player’s experiential attention through the lenses of those voices – touch, hearing, smell or intuition might dominate different players’ stories. This provides for some significantly different playthroughs, but also activates specific imaginative senses. In actor training, the term ‘sense memory’ is often used, and it refers to tying a particular emotional memory onto a sensation (trauma onto the smell of gasoline, attraction onto the texture of chocolate). By experiencing the sense during performance, the actor can relive the emotional memory, and certain actors find different senses more useful for this. DE’s dialogue system replicates this to a compelling degree, providing the player with the experience of being a sensationally specific character.

Secondly, DE introduces time. Mass Effect and Fallout 3 can be rightly critiqued for the dissonance between the fast-paced action gameplay and the glutinous slowdown of their dialogue scenes. The writers and voice actors of these games work hard to convey a sense of immediacy, but no amount of visual or emotional pyrotechnics can distract from the fact that what the player is really engaged in is a Choose Your Own Adventure novel. Even with all the background explosions, climactic music etc, talking still feels like it ‘suspends’ the world. In DE, however, every dialogue choice advances the game’s clock by three ‘minutes’. Bar reading a book or sleeping, this is the only way to advance time, and given that events in the story are tied to particular times of day, and that the central mystery is a time-sensitive affair, your (and your character’s) relationship to time is of extreme narrative significance.

The dialogue ‘hatch’

The second system is slightly more difficult to analyse, as one of its chief components is a lack of concern with showing how it works. As detailed in Jon Ingold’s AdventureX 2018 talk ‘Sparkling Dialogue’, it is concerned with fostering meaningful choice by limiting dialogue options. As such, I have chosen the metaphor of the service hatch to describe it. A waiter receives meals through the service hatch at a restaurant; behind the hatch is the kitchen, and until the meals are ready it remains closed. When the waiter is in position and the correct meals are ready, the hatch opens. The waiter selects the meal they need, and the hatch is closed. It’s difficult to conduct a comprehensive analysis without looking at the system’s underlying code, but it broadly functions like this:
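A minimal sketch of that hatch behaviour, assuming options carry a gating condition and a weight (the real ink runtime is far more sophisticated, and these names are my own):

```python
def open_hatch(options, state, limit=3):
    """options: list of (text, condition, weight) tuples.
    Only options whose condition holds against the current world state
    are surfaced, highest weight first, capped at a handful."""
    ready = [(weight, text) for (text, condition, weight) in options
             if condition(state)]
    ready.sort(key=lambda pair: -pair[0])
    return [text for (_, text) in ready[:limit]]
```

The player only ever sees the few options that ‘fit’ the moment – the kitchen stays hidden.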

Inkle’s games, like Heaven’s Vault and 80 Days, make heavy use of these weighting systems to provide seamless and context-bespoke options to the player. Dialogue choices are often trinary or binary, and uniformly short in length.

Choosing the next step in Heaven’s Vault

The immediate advantage of this system is that players are no longer choosing between options that might make no sense to them. NPCs ‘act’ towards the player in more complex and subtle ways (certainly more so than Bioware or Bethesda’s alignment / reputation systems). The visual design of the system itself also becomes incredibly flexible, and can procedurally inform other game systems. For example, Heaven’s Vault relies on a procedural camera positioning system which reacts to weights based on player position and dialogue choices in order to determine shot composition; Pendragon reads the state of its game board, including previous player ‘moves’, in order to generate its dialogue options between characters; the player’s moves after a choice further inform the development of that dialogue.

Pendragon’s dialogue system

All of this can only be good for story cohesion and player immersion, and the ways in which story systems have already been incorporated within games are inspirational. However, sometimes the exposure of a game’s systems is key to how players ‘grok’ it, and I find that, more often than not, I need someone to explain how the narrative systems in Inkle games are actually working in order to appreciate their elegance and responsiveness. While the intention might be to make the player feel more in line with the character – by limiting player knowledge to the player character’s immediate knowledge – in effect what it does is make the player feel more like a disciplined writer, and rarely (if ever) puts them in the ‘actor’ position.

One game that bucks this trend is Signs of the Sojourner. Now, whether Signs of the Sojourner actually uses an Inkle-style system under its hood I’ve no idea (it’ll be worth contacting the writer and developer should I continue this kind of research for my thesis), but on its surface it shares many similarities. Dialogue responses seem to be heavily context-dependent, and ‘options’ are almost non-existent, beyond who you talk to first in a location. Instead, the player utilises a deck of ‘cards’, each representing different emotional approaches. By matching them with their scene partner’s cards, the player either progresses the conversation constructively, or allows it to trail off.

A conversation in Signs of the Sojourner

After each conversation, the player must discard one card from their deck, replacing it with one of the NPC’s. This way, their future conversations are informed by their past ones, both mechanically and narratively – most importantly, the player has an easily referenced record of how this has happened. Many ‘hatch’-based games rely upon your ability to retain informational details in order to build up a sense of your character, and expect you to perform consistently with choices you have made during your playthrough. Not every player is likely to be this kind of ‘method player’, and games must provide shortcuts to memory and emotional association in order to remain accessible to attention-poor players – Signs of the Sojourner’s matching system does this with a careful simplicity. Players are reminded of their connections with other characters by the makeup of their deck, and – in an elegant mechanical metaphor – the more it changes the more their relationships with earlier characters shift.
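The surface logic, as I read it, could be sketched like so – whether the game’s internals resemble this at all is unknown to me:

```python
def matches(player_card, npc_card):
    """Cards are (left, right) symbol pairs; they connect if they share a symbol."""
    return bool(set(player_card) & set(npc_card))

def end_of_conversation(deck, discarded, gained):
    """Swap one card out and one of the partner's in - the deck as memory."""
    return [card for card in deck if card != discarded] + [gained]
```

The second function is the elegant part: the deck the player carries forward is a literal record of who they have met and how those meetings went.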

Critical Play Project: Research

This development log will track and summarise some of the research undertaken during this project.

I began this project with the objective of making a philosophically engaged game, specifically connected to language and dialogue. With most narrative-driven games using dialogue systems to simulate acts of speech, I decided that J. L. Austin’s seminal text How To Do Things With Words would be the start of my research, and might be a source of inspiration in how to approach the creation of alternative dialogue systems. From Austin I would move to John Searle, and then begin connecting my own background in actor training and theory to some writings on constructing game dialogue.

How To Do Things With Words (J. L. Austin)

A series of lectures delivered by the philosopher J. L. Austin in 1955 and published in 1962, this collection forms the basis of much modern philosophy of language.

Central to Austin’s proposition is a categorisation of spoken language into three different acts:

  • Locutionary acts, in which the intention or meaning of a speech act is bound up in what is said.
  • Illocutionary acts, in which the intention is separated from what is being said. For example, the phrase ‘is there any salt?’ often contains the illocutionary request ‘please pass me the salt.’
  • Perlocutionary acts, in which – thanks to particular contexts – a change is made to the world of the speaker. Perlocutionary acts are conditional, in that they only have an effect if specific conditions are met: saying ‘I now pronounce you to be married’ has no effect if the speaker is not legally ordained to marry two people, nor will it have an effect if there is no couple present to be married; if these conditions are met, though, then two people become a legal entity known as a marriage! 

The consequence of a speech act that doesn’t rely on true and false statements can also be defined as either ‘felicitous’ or ‘infelicitous,’ depending on whether the consequence is in line with the speaker’s intention. These will prove useful definitions when thinking about dialogue.
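A toy model of felicity, for the purposes of dialogue-system thinking (the representation is entirely my own): a performative only changes the world when all of its conditions hold.

```python
def perform(effect, conditions, world):
    """Apply a performative's effect only if every felicity condition holds."""
    if all(condition(world) for condition in conditions):
        return effect(world), "felicitous"
    return world, "infelicitous"
```

Framed this way, Austin’s marriage example is just a state change gated on conditions – which is exactly what a game’s dialogue ‘check’ already is.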

What Is A Speech Act? (John Searle)

Searle extends Austin’s work, breaking down language into rules, propositions and meaning. It is the last of these that most concerns the writing of dialogue. In order to analyse meaning within speech, we must define aspects both intentional (what the speaker is trying to achieve with the speech act) and conventional (what is commonly understood as being meant by the speech act). I would contend that, in the pursuit of directing the player’s attention, game writing has a tendency to over-clarify both of these aspects, and that an opportunity for play might exist in the interplay between them, and perhaps even in their muddying. 

Sparkling Dialogue: A Masterclass (Jon Ingold)

One of many talks and articles written by the lead developer of British game studio Inkle, this one focuses on simple techniques to sharpen up game dialogue.

In his analysis of an excerpt from an Assassin’s Creed game, Ingold makes some gentle criticism of the similarity between the lines – in order to hammer home a few crucial pieces of information, the writer essentially repeats the same line three times. It doesn’t take a huge leap to see some of Austin and Searle in Ingold’s thinking. The dialogue in Assassin’s Creed is locutionary, with its intention almost entirely bound up in its content. The result is dialogue full of information but devoid of subtext and drama. 

Ingold also outlines his own writing approach, which largely consists of three dialogue propositions: accepting, rejecting or deflecting. He maps these to three player types (those who prefer to dig deeper, those who want to get to the point, and those who want to disrupt narrative), but also to what he has found to be historically useful in writing dramatic game dialogue, namely that ensuring every player choice is in direct relation to the previous line of dialogue increases narrative engagement. The other upside to writing in this mindset is that if any programming needs to reference dialogue choices in terms of ‘attitude’, half the work of categorising them is already done!
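That ‘half the work’ might look like tagging every authored choice with its attitude – a sketch using the triad from the talk, though the structure is my own invention:

```python
from enum import Enum

class Attitude(Enum):
    ACCEPT = "accept"    # dig deeper, go along with the line
    REJECT = "reject"    # push back, get to the point
    DEFLECT = "deflect"  # sidestep, disrupt the narrative

def tally(choice_history):
    """Count how often the player takes each attitude - a crude player profile."""
    counts = {attitude: 0 for attitude in Attitude}
    for attitude in choice_history:
        counts[attitude] += 1
    return counts
```

Downstream systems (reputation, animation, pacing) could then react to *how* the player responds, not just which node they picked.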

Instant Acting (Jeremy Whelan)

One of my favourite books on actor technique, this connects elegantly with Ingold’s suggestion of ‘accepting, rejecting and deflecting.’ Whelan maintains that every action on stage (and indeed in life) can be broken down into three reactions – being ‘impelled’ towards someone or something, being ‘repelled’ away from it, or being ‘compelled’ to stay still in relation to it. By slowing down rehearsal (using a technique involving the recording and playback of spoken lines), actors can discover just how much they are impelled / repelled / compelled by each line they say and hear, and so uncover deeper emotions.

While Ingold supposes that certain types of players will ‘tend’ towards particular options (and hence that always writing these three can subtly ensure engagement across all player profiles), Whelan might contend that a player’s choice would always be contingent on what the line meant to them in that moment, and that creating the space for that reading to be fully felt might allow for a deeper, more considered, and more varied player response.

The Actor and the Target (Declan Donnellan) / Actions, The Actors’ Thesaurus (Marina Caldarone & Maggie Lloyd-Williams)

Declan Donnellan’s manual for actors argues that everything a character does (including what they say) is always in relation to another character’s perception of them. We never ‘see’ other people; rather, we see the image of ourselves that they see, projected back at us, and our egos are in a constant battle to change that image.

Underpinning this work is Max Stafford-Clark’s technique of ‘actioning.’ Actioning is the process of applying a transitive verb to the delivery of a line of dialogue, referencing both the speaker and the target. For example, a line might sound very different read with the action ‘I frighten you’ than if it were read with the action ‘I console you,’ yet both might be perfectly legitimate readings of that line. Each action choice creates a new context for performers to react to; the process of actioning suggests agreeing during rehearsal on a pattern of actions to be performed every night. The actions chosen might usefully connect to the theme of the play as well, to provide subconscious emotional and thematic prompts to its audience.

The reason The Actor and the Target is such a seminal manual is because it helps an actor to simplify and clarify what they’re doing when they’re not talking. By specifying what they hope to achieve, and the verb by which they go about trying to achieve it, they open themselves up to the possibility of success (‘felicity’) – or failure (‘infelicity’) – lying in the other actor’s line. Games have rarely engaged with this suspenseful moment between choices; indeed, contemporary titles still struggle with what to do with character animations while waiting for player input, resulting in a ‘waiting at a bus stop’ feel during more fully animated games.

Towards Expressive Input for Character Dialogue in Digital Games (Nick Junius, Michael Mateas & Noah Wardrip-Fruin)

This paper, written by researchers at UC Santa Cruz, reviews traditional game dialogue inputs and compares them to the work of performance theorists Konstantin Stanislavski and Bertolt Brecht, as well as to traditional Japanese noh theatre, in order to identify avenues for designing dialogue inputs that allow for greater expression.

The writers share my puzzlement that players, more often than not, are cast in the role of the writers and directors of dialogue scenes, rather than as actors playing a role. Of particular interest to me was their proposition surrounding animation – that, given the advances in procedural techniques, it would not be difficult to have animation responding to different types of Stanislavskian emotional ‘energy’ given off by players due to the quality of their input, in order to create a truly responsive scene.

Takeaways

This research led me towards some clearer definitions of how I would like to represent or explore Austin’s work in a game. I would like to:

  • Open up the ‘illocutionary space’ as an arena for play.
  • Reassess the importance of ‘plot’ and ‘progression’ in making a player feel like part of a dialogue.
  • Increase player sensitivity to the proximity and energy of their ‘partner,’ using ‘impel / repel / compel’ and ‘accept / reject / deflect’ as rubrics.
  • Encourage the player to (either consciously or subconsciously) define the effect they would like to have on their ‘partner’, and find an emotional reaction in whether the partner response is in line with or opposed to that effect. 
  • Experiment with a more sensitive and expressive system than ‘clicking.’

With these objectives in mind, I can begin work on the game proper.

Collaborative Project: 2D Lighting

This was my first attempt at using a 2D lighting system, and I found the process to be both challenging and creatively invigorating.

To first understand how to implement 2D lighting systems, I watched a handful of tutorials, collected below.

Brackeys, 2D Lights in Unity!, Available at <https://www.youtube.com/watch?v=nkgGyO9VG54>, accessed on XYZ
Unity, 2D Lights and Shadows in Unity 2019! (Tutorial), Available at <https://www.youtube.com/watch?v=F5l8vP90EvU>, accessed on XYZ

Point Lights & Global/Freeform Lights

Point lights are like lightbulbs you can place in a scene, lighting up the surrounding area. I used these at first to give the player a glow that would illuminate objects they approached; I liked the effect so much that I attached a point light to the fish prefab, altering its radius to match the size of the instantiated fish. This had the effect of bringing the schools of fish much more into the ‘foreground’ – they really felt like they were interacting with the player on the same ‘layer’, rather than moving behind them.

Fish and player point lights

Global lights increase the ambient lighting in a scene, while freeform lights create global lighting effects over a specific area. I used both of these to create a general ‘wash’, and to separate each ‘stratum’ of the level into a slightly different texture.

I also started using lights for more direct effects: extra point lights tied to triggers to bring players closer to, then warn them away from, the anglerfish; a freeform light at the bottom of the sea to give a gradient effect of ‘deepening’; and point lights at the entrance to the narrow tunnel and around the collectible at its end to draw the player to these locations.

Anglerfish ambush

Sprite Lights

Sprite lights are used to create lighting in the ‘shape’ of a sprite – sometimes parametric or freeform lights won’t be flexible enough, or provide enough detail. Xinyu had already provided me with two different ‘god ray’ sprites, which I had laid over the gradient ocean background in the prototype scene. By applying these to sprite lights, I was able to create a natural, directional brightness moving from the top corner of the scene (or, the surface of the ocean) to the bottom (or seafloor).

One of the ‘god ray’ images
You can see the player sprite ‘catching’ the different rays as they move

A simple cosine movement script created a tidal, dappling light effect over both the coral and the deep ocean floor.

Subtle dappling effect on the reef to the right
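The script itself reduces to offsetting a value by a cosine of time. Here is a Python sketch of the same maths (in Unity this runs per frame in C#, reading Time.time; the amplitude and frequency here are placeholder values, not the project’s):

```python
import math

def sway_offset(t, amplitude=0.5, frequency=0.8, phase=0.0):
    """Horizontal offset at time t for a gentle back-and-forth sway."""
    return amplitude * math.cos(frequency * t + phase)
```

Offsetting each light’s position by this value every frame produces the tidal, dappling motion; staggering the phase per light keeps them from swaying in lockstep.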

Normal Maps

The Brackeys tutorial also introduced me to normal maps – flat textures that approximate three dimensional surfaces. By now I had definitely spent more time on lighting than I had budgeted for, but I suspected that Xinyu’s detailed sprites might look even better if they caught light three-dimensionally. Plus, I was a tad underwhelmed by the effect of the player’s point light on the edges of the coral, so I decided to experiment with using normal maps.

Introducing normal maps was relatively easy – I used a program called CrazyBump (unlike Photoshop, Affinity Photo doesn’t come with tools for bump/normal/height mapping) to map all of the sprites, then assigned these maps as secondary textures in the Sprite Editor. It took a bit of work to generate maps for every sprite in the game, and there were some sprites that needed to be reimported, or have their transparency layers cleaned up outside of Unity first.

I immediately noticed a trade-off in brightness and colour saturation, but the detail and ‘pop’ that the light now gave the objects, as well as the realistic moving shadows, were well worth the work. Xinyu shared her happiness in how the sprites now looked.

Conclusion

Out of all the design elements I implemented in this project, I think these lighting systems had the most impact. Reactions among playtesters and viewers of gameplay footage were more positive once the lighting had been introduced, and I was pleased with how it helped to differentiate between the level strata. As observed in the environmental design case study of Abzu, lighting and shadows play an important role in communicating where a player is underwater; I enjoyed putting these design lessons into practice, and discovering how easy the system was to use.

Collaborative Project: L-Systems

One of Xinyu’s earliest sprite submissions was the seaweed spritesheet – it contained long strands of curved seaweed, with bulbous ‘heads’.

I initially animated these with a simple cosine ‘tidal’ script, so they appeared to bob and sway in the water, but I was keen to try something more complex.

I asked Xinyu to chop the sprites up into smaller units. My idea was to use Unity’s 2D physics to create ‘chains’ of these units, anchored to the seabed; if the ‘head’ of the seaweed was to move, it would cause the rest of the organism to move in a cascading, ‘natural’ way.

This proved much harder to implement than I had expected, perhaps due to my still incomplete understanding of Unity physics and its Joint objects. Additionally, even if I had been able to get a single chain of seaweed functioning properly, the workflow of hinging together piece after piece was far too laborious.

Luckily, Zans had covered L-Systems with us the previous week, so I created a new ‘dev scene’ to experiment with making L-System seaweed formations.

Step 1 was creating a ‘root’ object – this would be a prefab that could be instantiated repeatedly. It contained a sprite (the piece of seaweed I wanted to render) and a transform (for the next instantiated object to use as its position).

Step 2 was defining an axiom – the starting string – and a set of production rules by which the seaweed would know how to form itself. I wrote a short set of instructions, then had the script iterate over them 2 or 3 times – I wasn’t keen on overwhelming formations with 4 or more iterations!

My initial experiments produced bush-like formations that looked much happier on land. This was due to the rules randomising their angles freely: on land, sunlight surrounds plants, so leaves grow in various directions and angles in order to reach it. Seaweed, buoyed by salt water and seeking the sun above, naturally grows upward, so its angular formations are much more uniform. I swapped in some new instructions: if the program came across a + or a -, it would set the angle of the next instantiated object to between x and y (or -y and -x) degrees, with x and y being exposed floats I could tweak in the Inspector.
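The rewriting-plus-constrained-angles approach can be sketched in Python (illustrative only – the project's version is a Unity C# script, and the production rule below is hypothetical):

```python
import random

def rewrite(axiom, rules, iterations):
    """Expand an L-system string by applying production rules."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def branch_angles(instructions, x=15.0, y=35.0):
    """For each '+'/'-' instruction, pick a random angle between
    x and y (or -y and -x) degrees, keeping growth broadly upward."""
    angles = []
    for ch in instructions:
        if ch == "+":
            angles.append(random.uniform(x, y))
        elif ch == "-":
            angles.append(random.uniform(-y, -x))
    return angles

# Hypothetical rule: each F grows two side branches per iteration.
seaweed = rewrite("F", {"F": "F[+F]F[-F]"}, 2)
```

Constraining the random range to (x, y) rather than the full circle is what gives every formation the same upward bias while still varying branch to branch.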

This gave new formations a ‘natural’ logic, more akin to Prusinkiewicz and Lindenmayer’s sketches and axioms featured in The Algorithmic Beauty of Plants.

Prusinkiewicz, P. and Lindenmayer, A. (1990) The Algorithmic Beauty of Plants. New York: Springer-Verlag, p. 25. Available at: http://algorithmicbotany.org/papers/abop/abop.pdf (Accessed: 18 February 2021).

This setup allowed for a fast and productive workflow – I could generate seaweed and coral prefabs quickly, and made five prefabs in no time at all. Of course, I had forgotten to attach movement scripts to the sprites themselves, and had to add one to each sprite! Thankfully, Unity’s search function made this relatively painless. I updated the ‘branch’ prefab with the movement script, so that any future L-System prefabs would be pre-scripted.

Overall, I ended up with fifteen prefabs of different colours, sizes and textures. I was able to resize and rotate these in-scene to create more fibrous-looking seagrass, as well as using some placeholder rock sprites to create clumpier coral growths. 

Some of the prefabs implemented in-game – bump-mapping and a lighting system really brings them to life!

Were I to revisit L-Systems, I would look at coding a randomiser that altered the iteration recipe, so that on pressing space I would get a brand new formation, not just the same underlying structure with different angles. I would also fully create multiple branch prefabs first, then use a List to randomise which branch a new formation uses as its root. With those two simple additions, I would have a tool that could output hundreds of unique prefabs an hour, if not more. I hope not to need that many plants in the future, but you never know!

Collaborative Project: 2D Boids

Upon deciding on an underwater environment for the game, Xinyu and I both agreed we would need fish – and not just a single floating fish, as Xinyu had already drawn, but schools and shoals of fish.

Xinyu got to work making some different fish sprites, and I revisited Zans’ tutorial on boids. Boids are objects that react to each other’s position, rotation and speed to create formations; depending on the variables entered, they will appear to move like birds, fish, cars, meteors, pedestrians… The possibilities are endless!

I created a new scene to experiment with making boids, but immediately ran into a problem – the boids I had created for Zans’ tutorial were 3D boids, not 2D boids. 3D boids use different levels of torque to reorient themselves towards alignment, cohesion and separation vectors, with the averaging out of these forces returning a forward direction. Applying torque to a 3D object just means reading its position and direction as Vector3s, but finding the angle towards which to apply 2D torque requires a lot more maths. I spent half a day trying to brute force my way through this translation, but to no avail. I would need to start from scratch.

I watched two tutorial series to get a grasp on the practical differences between 2D and 3D boids. Obviously, the principles of Alignment, Cohesion and Separation remained constant, but how they were calculated differed.

Renaissance Coders, Unity 2D Artificial Intelligence Flocking Introduction. Available at: https://www.youtube.com/watch?v=YLxV5L6IaFA (Accessed: 13 February 2021).
Boards To Bits Games, Flocking Algorithm in Unity, Part 1: Introduction. Available at: https://youtu.be/mjKINQigAE4 (Accessed: 13 February 2021).
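The core per-boid step those tutorials teach can be sketched as follows – a minimal Python illustration of computing Alignment, Cohesion and Separation and blending them with priority weights (the weight values are placeholders, not the project's tuned numbers):

```python
def flock_velocity(boid, neighbours, weights=(1.0, 0.8, 1.5)):
    """boid/neighbours are dicts with 'pos' and 'vel' (x, y) tuples."""
    if not neighbours:
        return boid["vel"]
    n = len(neighbours)
    # Alignment: steer towards the average neighbour velocity.
    align = (sum(b["vel"][0] for b in neighbours) / n,
             sum(b["vel"][1] for b in neighbours) / n)
    # Cohesion: steer towards the centre of the neighbourhood.
    centre = (sum(b["pos"][0] for b in neighbours) / n,
              sum(b["pos"][1] for b in neighbours) / n)
    cohere = (centre[0] - boid["pos"][0], centre[1] - boid["pos"][1])
    # Separation: steer away from each neighbour's position.
    sep = (sum(boid["pos"][0] - b["pos"][0] for b in neighbours),
           sum(boid["pos"][1] - b["pos"][1] for b in neighbours))
    wa, wc, ws = weights
    return (align[0] * wa + cohere[0] * wc + sep[0] * ws,
            align[1] * wa + cohere[1] * wc + sep[1] * ws)
```

In 2D the three behaviours return plain vectors that are simply weighted and summed into a new velocity, rather than the torque-averaging the 3D version needed.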

By following the tutorials, I gained a practical understanding of how to alter the velocity of 2D objects by combining functions for Alignment, Cohesion and Separation. The results were pretty good, but I thought there was room for a lot of improvement:

Boundaries

The tutorial I had followed left me with a wrap-around environment – fish would hit a horizontal or vertical limit and be instantly transported to the opposite one.

This was easy to code, but I wanted something more natural.

Now that I had a grasp on how applying different vectors worked, it didn’t take me long to come up with a solution. If the distance between the boid and the tether point grew beyond 90% of the tether radius, a vector pointing back to the tether would be applied, with a strength that increased in proportion to the distance. By multiplying this strength by itself, I read, a more natural-looking effect could be achieved: smaller values affect the object more gently, and greater values more strongly.

By maintaining my code as separate functions, as suggested by the tutorial and by Zans, I found it very easy to add new variables. It was just a matter of writing a new function called Tether(), outputting a Vector3, then referencing that in the Combine() function.
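A minimal Python sketch of that Tether() idea, assuming positions as (x, y) tuples (illustrative only – the project's version returns a Unity Vector3 into Combine()):

```python
def tether_force(boid_pos, tether_pos, radius):
    """Steering vector pulling a boid back inside its tether radius."""
    dx = tether_pos[0] - boid_pos[0]
    dy = tether_pos[1] - boid_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    # Inside 90% of the radius (or exactly on the tether): no pull.
    if dist < 0.9 * radius or dist == 0:
        return (0.0, 0.0)
    # Strength multiplied by itself, so the pull ramps up smoothly
    # near the edge rather than snapping on at full force.
    strength = (dist / radius) ** 2
    return (dx / dist * strength, dy / dist * strength)
```

Squaring the distance ratio is what gives the "soft boundary" feel: a boid just past the threshold is nudged, while one far outside is yanked back firmly.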

Avoidance

The final piece of the puzzle was implementing more natural avoidance. So far, boids reacted to an ‘enemy’ object by reversing their trajectory, or ‘doing a 180’ (you can see this in the above video). This seemed unrealistic for ocean-going bodies, so I did some quick research on calculating different angles from 2D objects.

By taking the cross product of the boid’s and the enemy’s positions, I was able to determine which side of the member the enemy object stood on; then it was just a matter of creating a perpendicular velocity vector in the opposite direction (90 degrees left if the enemy object is on the right, and vice versa).
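One common form of that side test, sketched in Python (illustrative; this version uses the boid's heading and the vector towards the enemy, rather than raw positions):

```python
def run_around(heading, to_enemy):
    """Return a velocity perpendicular to heading, away from the enemy."""
    hx, hy = heading
    ex, ey = to_enemy
    # The scalar 2D cross product's sign tells us which side
    # of the heading the enemy sits on.
    cross = hx * ey - hy * ex
    if cross > 0:                  # enemy on the left: turn 90° right
        return (hy, -hx)
    return (-hy, hx)               # enemy on the right: turn 90° left
```

Because the result is perpendicular rather than opposed, the boid curves around the threat instead of reversing on the spot.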

I put this code into a new RunAround() function, and made sure the Avoidance() function pointed to RunAround() instead of RunAway() – separating out my functions like this happily meant I only had to replace one declaration in the code!

After a bit more weighting, the resultant avoidant behaviour seemed much more realistic – fish no longer handbrake-turned away, as they would from a shark, but rather swam around the player. This had the added benefit of bringing the fish generally closer to the player, allowing more of them to be caught in-camera.

Clustering

Finally, I wanted the fish to be able to ‘cluster’ around a point or an object, primarily as a way to direct the player towards a collectible object. This was simple to achieve. With the object or position as a child of the fish controller, and a Circle Collider acting as a trigger, I could signal to the fish controller that the school’s ‘behaviour’ had changed, and also let the MemberConfig script know via a public Switch() function. (The behaviour is a string variable, though it could just as easily be a bool – I left it as a string in case I want to re-use this code for more complex behaviour in the future.)

Within the MemberConfig script, if Switch() was called, the variables would be set equal to the ‘free’ or ‘cluster’ versions, depending on which behaviour had been set by the fish controller script. Because all of the variables were applied using priority weights, and were generally <1 in size, the effect of changing behaviour appeared quite natural.
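A hedged Python sketch of that switch (the variable names and weight values are hypothetical; the project keeps these in a Unity MemberConfig component):

```python
# Two pre-tuned sets of flocking weights; Switch() swaps between them.
FREE = {"cohesion": 0.8, "separation": 1.2, "target_weight": 0.0}
CLUSTER = {"cohesion": 1.5, "separation": 0.6, "target_weight": 2.0}

class MemberConfig:
    def __init__(self):
        self.behaviour = "free"
        self.weights = dict(FREE)

    def switch(self, behaviour):
        """Called by the fish controller when the trigger fires."""
        self.behaviour = behaviour
        self.weights = dict(CLUSTER if behaviour == "cluster" else FREE)
```

Since the flocking step already blends everything through these weights, swapping the whole set at once reads as a gradual change in the school's mood rather than a hard cut.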

Conclusion

I was really pleased with the results, and glad that I spent the time understanding these algorithms. Giving environmental elements behaviour that reacts dynamically to player position and input can really heighten immersion, and make for a much more compelling aesthetic than just animating fish into an unresponsive background layer.

Designing & Prototyping: 18-Card Game (Week 5)

And by ‘Week 5’ I mean anything I did between the end of term and the submission date.

Example of Play:

First, I made a video of myself playing a four-player game. The bird’s-eye view was achieved by balancing my phone on a hanging clothes horse – hence the wobble!

Final Tweaks:

Before laying out the rules, I contended with some of David’s feedback on the previous draft – specifically the problem of ‘game decay.’ His suggestion was to remove cards as play progressed, but I suspected that I lacked the time to properly balance that kind of intervention.

My solution was to implement a temporary discard mechanic. I had already come up with the idea of a separate stack of cards, used when the 18-card deck couldn’t be split evenly between players. If any cards were in the discard stack at the beginning of a round, one would be drawn and placed face up to begin the round’s burger stack. Previously, this pile had only come into play for a few rounds.

The idea came from playing Magic: The Gathering Arena, and specifically this card, which causes a player to draw two cards then discard one.

By adding a discard to that pile every time a player received cards (when they incorrectly repeated the order, or when another player gave them cards on correctly repeating the order), the discard pile would never run empty, and with every round at least one card would be rotated out of play. If a player chose to split their correctly claimed cards between multiple players, the pool of cards in play would decrease, as would the predictability of action.

Unfortunately, I haven’t had time to playtest this addition – something for the future.

Laying Out the Rules:

I used Affinity Publisher to lay out the rules, keeping in mind what I’d learned from the first project of the term. I used icons from game-icons.net to add some visual flair.

I also laid out the print-and-play sheets – quite a simple job, just aligning nine images on an A4. If I develop this game further, I will look to a clearer black and white design, as at the moment it’s a little indistinct.

Designing & Prototyping: 18 Card Game (Week 4)

Feedback session (14/12/2020):

During the feedback session I shared the results from the playtest, and also some of my concerns re: the balancing. David reassured me that the game was already in a submittable state, and that some of these balancing questions could happily be solved if I wanted to work further on it after hand-in. I still felt like I could get a cleaner flow out of the game though, and resolved to playtest it further.

Notes from playtesting (Week 4):

I played the game online with my partner and friends over a few afternoons. Lots of positive comments, but I observed the following:

  • Once players have the basic rules down, the ‘correct’ way to play is to always claim stacks as soon as they become available – winning a stack is so valuable, as it both sheds your cards and jams up other players.
  • As a game for two players it’s a little dry – I’d like for there to be more surprise and strategy, as in UNO…
  • There’s definitely potential for play to continue more or less infinitely, especially between three or more players. I need to work out a way of finishing play, but the size of deck doesn’t really combine well with discarding entire stacks (if a stack of 6 is discarded, that’s only 12 cards to play with, and if the objective is to make the biggest stack, well, anyone who makes a stack of 7 has basically already won).

Questions going forwards:

At the end of four weeks of development, I’m still uncertain as to the rhythm of play, both on a turn-by-turn level and over a whole game. The game seems either too easy or too difficult to win, and the rhythm of the rounds seems too predictable. I think the game as a whole is submittable, but I’d like to settle on a much cleaner direction, either before or after hand-in. Luckily, there is a week of tuition before the hand-in, so I plan on making some different versions of the game to playtest for then.

There are a few directions I think I can go in:

  • Increase the speed and intensity of the game – like Spit or Shithead, the objective is to play fast and hard.
  • Add an additional stack, maybe one per player – this would increase the memory load, which might be a good or a bad thing for difficulty.
  • Remove the SNAP! mechanic, and rebuild it as a turn-based game. This would enable the cards to each have different abilities – maybe ‘casting’ a pickles card makes someone read the claimed stack backwards?
  • Remove the shedding mechanic entirely, make the goal about getting either:
    • The tallest stack, or
    • The most stacks in a time limit.

The game should also be redesigned to work in black-and-white – currently I’m using colours to communicate whether a card has a veggie, a sauce, or bacon on it, but most print-and-play players will likely be printing monochrome. I’ll need to come up with some symbols, or better illustrations (my designs are still very placeholder-y).

Lots of work still to do!

Designing & Prototyping: 18 Card Game (Week 3)

Week three progressed much better. After having stressed about the unviability of the object stacking idea, I spent an hour before our consultation with David iterating on the basics of the project, this time inserting more limitations. How could I keep the narrative and visual elements of this burger stack idea, which I liked, but engage more with the mechanics of card games? Why have objects at all when the cards themselves can be stacked?

The game switched, then, from a game of competing teams to a game of competing players – I still hadn’t figured out the rules, as such, but I knew that play would involve players taking turns to place, from their hand, a new topping onto the ‘burger’, and that points might be awarded for its eventual height. I also played with the idea of making a memorisation mechanic, so in order to ‘claim’ or ‘bank’ a finished stack of cards, the player would have to repeat back the order in which the cards were placed (kind of like line cooks repeating orders back to waiters on the pickup!).

My concern about this new direction, however, lay in the number of cards – UNO and other card-stacking or -shedding games often deal in very large decks, and I only have access to 18. If, say, two are reserved for the top and bottom of the burger, then that’s 16, which makes for a very short game between 3 or more people…

Feedback session (07/12/2020):

I pitched this new direction to the group, along with the above caveat about deck size, but David didn’t see this as a problem, especially for a game with a 5–10 minute playtime, and the group enthused about the new direction. David suggested a mechanic like Snap!, wherein each player can ‘claim’ the stack by slamming their hand on top of it, thus stopping the game from becoming too rhythmically predictable. This felt like the last piece of the puzzle, and I left the session feeling positive about iterating a new ruleset, as well as with two games to research (Jazz and Them’s Fighting Words).

New ruleset & playingcards.io ‘room’:

Cleaving to David’s reminder that this game should be quick to play, I quickly bashed out a new ruleset and set up a way to playtest the game digitally. Making a ‘room’ on playingcards.io was way easier than I expected, and I made use of a lot of the platform’s functionality. Below is a visual record of me setting up the room ahead of playtesting.

The initial board state for three players, with a ‘deal’ button that splits the deck evenly
‘Hand’ functionality after the first draw
Now the stack is flipped, but the player still needs to individually turn each card
The final layout, with player scores on the bottom! This was so easy to set up!

Playtesting with Arthur and Jacky (10/12/2020):

Notes, questions and reflections:

  • Immediately, the ‘goal’ of the game needed more clarity and urgency, so I introduced a shedding mechanic – the goal is to be the first person without any cards in their hand. By claiming and repeating a stack correctly, you can assign the cards to other players, but if you repeat a stack incorrectly, then you pick up all the cards.
  • What about ‘incorrect’ stacks? We played around with some restrictions on placing cards: no doubles, doubles allowed, stacks are claimable only if they have at least one burger and one sauce, stacks are claimable only if they have one burger… 
  • What happens when you can’t go? We tried drawing extra cards (fussy), being forced to claim the stack (too punishing), but the best solution was just passing the go to the next player.
  • Control: if stacks need burgers to be claimed, then the player with the most burgers controls the pace of the game. Given that players receive these cards when other players successfully claim a stack, and no cards are currently discarded, it puts advantage in the losing players’ court, and runs the risk of game decay (play proceeding forever).
  • It became apparent that claiming the first stack was often a surefire win – the player then only had a few cards to shed in order to claim victory. I will need to look into balancing this, or making stacks more difficult to claim / winning them less punishing for other players. Maybe there’s another pile where cards go? Is shedding the issue here?
  • Size of hand – I want to find a way to limit this. Having a hand of 10 cards works online, but holding them in your hand and making quick decisions in person would be aggravating for players in a losing position.
  • Size of deck – the 18 card limit means splitting the cards into manageable hands in order to simplify play is difficult. For example, there’s a risk that, if the deck were divided into hands of 4, only one burger would be drawn. Or if burgers and sauces were required to make a stack, that no sauces were drawn.

Lots of questions still to iron out, but at least the response to the game was positive, and I enjoyed playing it.

Post-playtesting ruleset:

Designing & Prototyping: 18 Card Game (Week 2)

This week I mostly brainstormed the initial idea for the stacking game.

The ‘cards’ are circular, and represent different ‘layers’ of a burger: bun, lettuce, sauces, bacon, cheese. Written on each one are a series of prompts that could describe objects in a room: thick, squishy, flammable, living. Players are split into two teams, and must compete to build the tallest ‘burger’ by stacking objects that correspond to one of the prompts, and sandwiching them between the cards. For example, the first card placed is always the toasted bun card – players pick or are assigned at random a prompt from the bun card, and have to stack objects on top until time runs out. Then they place another card (say, lettuce) on top of these objects, which assigns them a different prompt. The timer starts again, they stack until it stops, then place another card on top (say, cheese). This assigns them another prompt, and play continues in this fashion until the final timer runs out and the sesame bun tops the stack. The team with the tallest freestanding burger wins.

I came up with a few solves for randomising the prompts: the opposing team could pick the prompt to ensure maximum difficulty/hilarity, or the cards could be flipped in the air onto a surface, and the prompt closest to a particular edge or extended finger could be selected.

After messing around with different prompts and failing to get any surprising stacks, I simplified the restriction to different letters. The game now relied on players being able to define objects by different names so they can be legally stacked – there’s a little more invention in this approach, rather than just physical dexterity. This mechanic also feels more suitable for children, and inspired a cool name: Alphaburger!

Reflections and Feedback:

I spent a day playing around with stacking using Alphaburger‘s draft rules. One of my chief problems was why this needed to be a card game at all – was I just making a complicated reason to stack objects?

A second worry was the consistency of play – surely clever players would just place square/flat objects, and aim to win on stability rather than height? A restriction on object size could be made – for example, the bottom-most object must fit entirely within the bounds of the card – but the idea of making rules for what can be stacked and when feels like it would drag the pace of the game.

On revisiting other stacking games like Jenga and Beasts of Balance, I found that they share a consistency of shape and material that keeps play predictable. Even stacking objects on my desk and in my room, I naturally selected objects that were more solid and predictable; otherwise the game was likely to end instantly.

During our feedback session, David observed that the cards could be used as individual prompts rather than physically involved in the game, but again, this just seemed like a weak implementation of actual cards. He also observed that the material of the cards (most likely paper) might not suit the kind of flipping that I was describing. I agreed.

Ultimately, I think it likely that this game plays better in abstraction than it does in reality.