Critical Play Project: UI, Text and 3D Animation

Having identified exposed systems and UI feedback as areas to explore, I set to work in two directions: developing feedback around text and UI, and learning how to rig a humanoid 3D model for procedural animation.

Text & UI

My initial designs for the scenes didn’t change much – I think subconsciously I was already influenced by the radial menu of Mass Effect, so the bottom of the scene became the location of the ‘choices.’ I experimented with bringing the choices further into the mise-en-scène, as in Fallout 4, but with so few non-textual elements (sprites, backgrounds) to balance against them, the frame became lopsided. I think I was also assuming a phone-shaped viewport (9:16 rather than 16:9), so arranging the text along the vertical eyeline made more sense to me.

In terms of feedback, I experimented with fading both the choice text and the story text towards ‘impel’, ‘compel’ and ‘repel’ colours in order to expose the possible outcomes. The most logical scheme to me was having the choice text fade towards the choice’s colour, and the story text fade towards the likely reaction’s, but players found this confusing: the story text wasn’t the thing that would react. I switched to some rotating circles that grew with the likelihood of each choice, which seemed to make more sense to players. Following Disco Elysium’s example, I also changed the colour palette to something a bit more striking.

Ultimately, the rotating circles just seemed a bit nonsensical, so I opted instead for a gradient background that would shift its colour proportions with the likelihood of the previewed outcome, then fade to the ultimate reaction once a choice had been made. I also intended to add a ‘splash’ effect inspired by Disco Elysium’s dice-roll feedback, but rather than using it to communicate which response the weighting algorithm had chosen, I made it check whether the response was in line with the prediction – i.e. whether it was a felicitous or infelicitous response. However, with Unity’s Shader Graph giving me trouble and time pressures mounting, I couldn’t get these looking slick enough, and had to cut them before submission.
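The gradient idea boils down to a weighted colour mix. Below is a minimal sketch of that logic in Python – the project itself used Unity C# and Shader Graph, and the colour values and function names here are illustrative, not the project’s real ones:

```python
# Illustrative sketch only: blend three outcome colours in proportion to
# their likelihoods. The actual project implemented this in Unity/Shader Graph.

def blend_outcome_colours(likelihoods, colours):
    """Mix RGB colours in proportion to each outcome's likelihood."""
    total = sum(likelihoods) or 1.0
    weights = [l / total for l in likelihoods]
    return tuple(
        sum(w * c[i] for w, c in zip(weights, colours)) for i in range(3)
    )

# Placeholder palette for the three reaction types.
IMPEL, COMPEL, REPEL = (0.9, 0.3, 0.2), (0.2, 0.7, 0.9), (0.6, 0.2, 0.7)

# Previewing a choice twice as likely to impel as to repel:
background = blend_outcome_colours([0.6, 0.1, 0.3], [IMPEL, COMPEL, REPEL])
```

Once a choice is made, the same blend can be driven to weights of (1, 0, 0) for the actual reaction, giving the fade to a single colour.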

To expose the ‘actioning’ system, I floated some new text boxes in the scene, which would be filled by actions randomly selected from certain lists, depending on mouse position – this was simple enough, but timing their appearance and disappearance took some experimentation. If they disappeared too quickly, players didn’t fully absorb them; if they stuck around too long, they over-coloured the next response.
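The selection half of this is simple, as described; a hedged sketch follows, with hypothetical action lists, screen regions and timing values standing in for the ones actually defined in the Unity project:

```python
import random

# Hypothetical action lists keyed by screen region; the real lists and
# region logic lived in the Unity project.
ACTIONS = {
    "left": ["glances away", "shrugs", "folds their arms"],
    "right": ["leans in", "nods", "raises an eyebrow"],
}

# Display timings in seconds (placeholder values; the real ones were
# found by the trial and error described above).
FADE_IN, HOLD, FADE_OUT = 0.2, 1.2, 0.4

def pick_action(mouse_x, screen_width, rng=random):
    """Choose a random action from the list matching the mouse's half of the screen."""
    region = "left" if mouse_x < screen_width / 2 else "right"
    return rng.choice(ACTIONS[region])
```

The timing constants are the part that needed playtesting: too short and the text isn’t absorbed, too long and it bleeds into the next response.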

3D Animation and Rigging

For scene four I had set myself the challenge of animating a ‘scene partner.’ I didn’t expect to be doing any facial animations, but I did need something with a head and shoulders, and I didn’t feel like a 2D sketch would be able to capture much subtlety without a lot of work. So I searched the Unity Asset Store for some free 3D models; most were fantasy or science fiction creatures, or young women in the anime style, but I did find one asset pack of suitable models, from a game called Distant Lands.

These models came with their own rigging data, and one of the comments praised the pack for its use in rigging practice, so I downloaded it. With Unity’s own rigging package, accessing these free models’ bone systems was a two-click process, and I had soon programmed the head to follow a ‘tether’ game object, which would move in relation to the ‘distance’ variable in the Ink script. By tying the x, y and z positions of this tether to different variables, and having the head lerp towards the tether rather than snap to it, I had created a rudimentary but effective procedural animation.
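The tether-follow behaviour can be sketched as below – in Python for clarity rather than the Unity C# actually used, with the `speed` value and the mapping of Ink variables onto axes being assumptions:

```python
# Sketch of 'lerp towards the tether rather than snap': each frame the head
# moves a fraction of the remaining distance. Names and values are illustrative.

def lerp(a, b, t):
    return a + (b - a) * t

def tether_from_ink(distance, sway, tilt):
    """Map Ink story variables onto the tether's x, y, z position
    (assumed axis assignment)."""
    return (sway, tilt, distance)

def update_head(head, tether, speed, dt):
    """Ease the head towards the tether each frame instead of snapping."""
    t = min(1.0, speed * dt)
    return tuple(lerp(h, g, t) for h, g in zip(head, tether))
```

Because the head only ever covers part of the remaining gap each frame, it eases in smoothly, and moving the tether produces the procedural-animation feel described above.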

Now for the body. In my sketches I had only considered the head and neck, but with those as the only moving parts the model looked extremely strange – I needed to move the chest as well. I didn’t have time to investigate particularly complex approaches, but my solution was satisfactory: I created two new rigs, both affecting the same piece of the model’s ‘spine.’ One would rotate the spine around the Z-axis normally, and the other would do so inversely; the effect would be of someone ‘opening’ or ‘closing’ their chest to something. Next, I wrote some code that altered the ‘weight’ of each rig, tied to the partner’s ‘attitude’: the further into the negative the attitude, the more heavily weighted the inverse rig, and vice versa for the positive.

This connects quite nicely to the Santa Cruz paper’s coverage of Stanislavskian approaches to animation – by ‘opening’ or ‘closing’ the model’s chest depending on its attitude, I could give the impression of different kinds of performative ‘energy,’ which coloured not only the impact of player choice, but also the next line of partner dialogue! (Of course, this only works with unvoiced lines – a voice actor would either have to record nigh-infinite emotionally ‘tinged’ responses, or lock the responses down to only a few.)
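The attitude-to-weight mapping is the core of this trick. A minimal sketch, in Python rather than the project’s Unity C#, and assuming an attitude scale of ±10 (the real range of the Ink variable may differ):

```python
# Sketch: map a signed 'attitude' value to weights for the two opposing rigs.
# The limit of 10.0 is an assumed scale, not the project's actual one.

def rig_weights(attitude, limit=10.0):
    """Map attitude in [-limit, limit] to (open_weight, inverse_weight),
    each in [0, 1]. Only one rig is ever weighted at a time."""
    a = max(-limit, min(limit, attitude)) / limit
    return (max(0.0, a), max(0.0, -a))
```

At attitude zero both weights are zero and the spine sits neutral; pushing attitude negative ramps up only the inverse (‘closing’) rig.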

The only thing missing was a sense of ‘breath,’ but this was a doddle: I added a script that moved the model’s ‘facing’ tether up and down on a cosine function. If I were to dig further into this project, I would investigate tying the frequency and amplitude of this movement to some of the reaction values, in order to mimic emotional changes in breath.
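The breathing motion amounts to one line of maths; sketched in Python below, with the amplitude and frequency values as placeholders (the follow-up idea would make exactly these two parameters functions of the reaction values):

```python
import math

# Sketch of the breathing offset applied to the facing tether's height.
# Amplitude and frequency are placeholder values, not the project's tuning.
def breath_offset(time_s, amplitude=0.02, frequency=0.25):
    """Vertical offset for the facing tether: one full breath
    every 1/frequency seconds."""
    return amplitude * math.cos(2 * math.pi * frequency * time_s)
```

Because the head lerps towards the tether, the cosine bob comes out smoothed rather than mechanical.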

Now I had a reactive model, but I found that player attention on the previewed text had fallen off. Players needed the preview feedback in this scene to be tied more closely to the central visuals – I needed to re-expose the possible responses. So I made three duplicates of the partner model, connected each one to one of the three ‘likely’ variables, and gave each a different degree of transparency.

When the preview function was called, each would shift towards the likely outcome, as well as change mesh colour. The effect was ghostly, and a little awkward, but seeing these holograms respond to mouse position gave some good gamefeel, and watching the ‘real’ partner model merge with one of them gave a clear indication of the narrative and emotional outcome of a choice.
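The transparency side of the ghost effect can be sketched as a simple likelihood-to-alpha mapping; the bounds below are guesses at sensible values, not the project’s tuned numbers:

```python
# Sketch: more likely outcomes render as more opaque ghost models.
# min/max alpha bounds are assumed values, not the project's actual tuning.

def ghost_alpha(likelihood, min_alpha=0.05, max_alpha=0.5):
    """Map a likelihood in [0, 1] to a ghost's mesh alpha."""
    l = max(0.0, min(1.0, likelihood))
    return min_alpha + (max_alpha - min_alpha) * l
```

Keeping even the least likely ghost faintly visible (the non-zero floor) is what lets players see all three possible responses at once before committing.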