Final Major Project: UI and Player Feedback

As I moved into the final weeks of the project, I began sharing builds of the game with people. I was interested to see how people reacted to it with no tutorialisation – I knew that I no longer had time to finesse particularly elegant versions of my original example outputs, but I hoped that player feedback would bring my attention to some immediate ways I could better engage players.

Feedback on the whole centred around feeling confused and overloaded – some players would have preferred less written information, some more. Most wanted either to interact with the game at a more moment-to-moment level, or to have a way to dig deeper into the repercussions of their actions.

To solve the informational overload, I returned to my case study of Dwarf Fortress, and to my original notes on the different kinds of ‘text’ one creates when directing a play – stage directions, notes on performance and notes on internal motivations. By splitting up these types of information – as Dwarf Fortress does with its thoughts, combat and critical events – and tying them to different modes of player interaction, I could both lessen the player’s mental load and provide ways for them to gain new perspectives on the action.

One simple UI fix was making the environment lightly interactive – when the player hovered over an Area, it changed colour and displayed its name, signalling that it was available to be selected. In the future I imagine adding other small ways for the player to interact with the environment – maybe changing the colour of the moon (as Stanislavski describes altering the lighting states of his theatre exercises in An Actor Prepares) or adding different props to the scene.
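In pseudocode terms, the hover behaviour amounts to something like the sketch below – the class, colour names and tooltip method are illustrative stand-ins, not the project’s actual engine code:

```python
# Illustrative sketch of the hover-highlight behaviour; names are
# placeholders, not the project's real classes.

class Area:
    def __init__(self, name, base_colour="grey"):
        self.name = name
        self.base_colour = base_colour
        self.hovered = False

    def on_hover(self, hovering):
        """Toggle the highlight when the cursor enters or leaves the Area."""
        self.hovered = hovering

    @property
    def display_colour(self):
        # Tint the Area while hovered, signalling it can be selected
        return "highlight" if self.hovered else self.base_colour

    def tooltip(self):
        # Show the Area's name only while hovered
        return self.name if self.hovered else ""
```

So hovering over a hypothetical `Area("Veranda")` would switch its colour to the highlight tint and surface the name “Veranda”, and moving the cursor away reverts both.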

I also rewrote how the game processed its written output. If the player was focusing on an actor, the game now checked whether an Action concerned them directly and, if so, output it to a reader at the bottom of the screen; likewise, if a room was selected, all the Actions inside it were output to the left-hand scrolling readout. If the player was focused on nothing, no written output was communicated (though it was all still saved in a master debug log).
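The routing logic boils down to something like this sketch – the Action and focus structures are hypothetical simplifications of what the game actually passes around:

```python
# Hedged sketch of the focus-dependent output routing; the dict shapes
# and field names are illustrative, not the project's real data model.

def route_action(action, focus, debug_log, actor_reader, room_readout):
    """Send an Action's text to whichever UI pane matches the player's focus.

    Every Action is archived in the master debug log; it only reaches a
    visible reader if the player's current focus concerns it.
    """
    debug_log.append(action["text"])
    if focus is None:
        return  # no focus: nothing is shown on screen
    if focus["kind"] == "actor" and focus["id"] in action["participants"]:
        actor_reader.append(action["text"])   # bottom-of-screen reader
    elif focus["kind"] == "room" and focus["id"] == action["room"]:
        room_readout.append(action["text"])   # left-hand scrolling readout
```

The same Action thus lands in different panes depending on whether the player is watching an actor, a room, or nothing at all – which is what lets each ‘lens’ show a manageable slice of the simulation.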

When my tutor played through the game in this state, he reflected that he still didn’t have a great idea of what was going on. Aside from a tutorial, I imagined that a greater degree of audiovisual feedback would help players make sense of the procedural activity. I quickly recorded some footstep sounds (in a cat litter tray for outside, yuck), did some ‘item interaction’ foley work, and brushed up on my Russian to give characters individually pitched voice lines. I also tied some basic animation to the Actors’ sprites, triggered whenever they ‘speak’ to each other. Field recordings of Russian national parks, as well as an archive recording of Schumann’s ‘Two Grenadiers’ (one of the songs sung by Sorin during the play), provided the final piece of the (now slightly comedic) puzzle.

Upon showing the newly polished game to people, I was pleased to see that most restarted after one playthrough and experimented more with the different UI ‘lenses’ on their second. Though I hadn’t found the time to increase player engagement by deepening the game’s systems, I had been able to do so through some scrappy UI work – I will make sure to bring this kind of design into my projects at much earlier stages, as it really helped me make sense of the experience I was developing.