Final Major Project: Critical Reflections

Personal aims and objectives

For my final major project I chose to create a procedural narrative game – one that would attempt to stage a rehearsal of Chekhov’s The Seagull, but without the input of the writer or director – titled Chekhov’s Gone! This project was inspired by the work I had done over the summer on my thesis, which looked at game dialogue design through the lenses of time, space and interface. By presenting a diverse range of dialogue designs and interfaces – summary vs scene, spatially segmented vs fixed perspective (Wei et al., 2010), integrated vs superimposed and ludic vs fictional (Jorgensen, 2013) – I hoped to create something that could be enjoyed by many different player personae, not just those that enjoy reading a whole bunch.

As ever, I was keen to stretch my technical side by learning some new skills, and I set my sights on something that had intrigued me from earlier in the course: agent behaviour. This is a core part of one of my all-time favourite games, Dwarf Fortress, which also features heavy procedural generation and emergent narratives – I was excited to see what I could learn from it.

Playthrough of Final Prototype

Summary of development

My research into this project was mostly through my thesis – over the summer I had gathered a rich range of inspirational games that used dialogue in different ways, and I was excited to create something that repositioned a player’s understanding of dialogue design. I used tutor input to help me select my direction, which was a Lemmings-style agent behaviour game based on a theatre rehearsal.

The project reminded me a lot of Dwarf Fortress – albeit in a much simplified form – so I also performed a case study of that game, using the frames identified in my thesis to analyse its use of time, space and interface in communicating dialogic and narrative information to its players. Tarn Adams’ designs for dwarven thoughts and preferences, as well as Stanislavski’s work on actor training, provided the framework for my initial designs for agent behaviour and interaction. However, by the time I had a clear idea of how to proceed with any of this, I had only seven weeks left to make the project!

In the first few weeks I focused on setting up an efficient pipeline for adding different agents and objects to the game. This involved understanding Scriptable Objects and how their data can be fed into class instances at runtime, as well as some .csv translation that would enable large numbers of objects to be created in an accessible spreadsheet format and loaded in batches. The learning curve was steep, but I now feel a certain confidence in coding modular foundations for games in general, which is good.

The majority of my development work then went into designing and coding behaviour loops for the game’s agents. This was a more enjoyable challenge than the tools-based work described above, and I was surprised at how well some of the approaches from earlier MA projects fitted this one. Due to time pressures I wasn’t able to reach the level of behavioural complexity I was seeking, nor was I able to implement much of a procedural text generator to connect to it, but the modularity of my loop leaves room for further iteration and expansion.

In the final stages of the project I began soliciting player feedback from friends, coursemates and tutors, and most of it drove me towards developing and sharpening the game’s UI. This synthesis of feedback and late-stage experimentation was a crucial step in reconnecting me to my original creative drive, and much of the game’s ‘game-iness’ came from these final buttons, sounds and animations. Game design, as ever, remains a highly holistic discipline – once I had even a sprinkling of visuals and audio, I could see how best to finish the final prototype.

Critical reflection

In terms of organisation, there are a lot of things I would improve. Waiting to reach the conclusion of my thesis before beginning my experimentation, as well as moving house several times over the summer, cost me a lot of time. My initial work on making an efficient content delivery system in many ways distracted me from reaching the iteration part of my design. In retrospect, the way I set up some of my classes would also prove problematic – although Actors and Items were happy to be serialized in the inspector, Areas were not. This was because they contain lists of other Areas, which of course contain lists of other Areas – a level of recursion that was too much for Unity’s serializer to handle. This caused me difficulty when debugging – although it was far easier to access connections between these classes in the code itself, it would have been valuable to be able to observe their behaviour more clearly in-game. If I had realised this earlier I also wouldn’t have burned three crucial days towards the end of the process trying to make an impossible-to-implement-given-my-data-structure rewind mechanic. When considering my work in the context of the game development life cycle (Widyani, 2013), I spent too much time in the foundational stage, and too little in the experimental and structural stages. This left me with an unbalanced product as I approached hand-in, and forced me to begin refining work that I didn’t necessarily consider finished.

While the idea behind Chekhov’s Gone! stood thematically and formally within my comfort zone, mechanically it required me to do a lot of learning. I think I was taken in by being able to talk cogently about how actors function on stage without really understanding the amount of work it would take to render that into a game, but this is itself a very good lesson, especially if I end up working on procedural narratives in the future! Also, going through the process of devlog reflection helped me keep the frames identified in my thesis in mind during the development period, not just in the design period – for works of this potential complexity it is very easy to get lost in the code, something I was definitely guilty of!

Chekhov’s Gone! itself currently functions only at its most basic level: most of its methods are still set to random outputs rather than being truly responsive. I have had to spend the majority of the last few weeks making it intuitive and grokkable for the player, rather than deepening or broadening any of its systems as I would have preferred – however, this has refocused me on my original designs, and on the inspirations behind the game. Even if it’s not as complex as I had intended, I’m proud of its toy-like level of interactivity and feedback. I’m also reassured by the modularity of its underlying code – between submission and the showcase exhibition I plan to bring it closer to the product I had imagined.

In the future I will be sure to bring UI in a lot earlier, budget more time for design and content creation, and keep the tools design to a minimum. If this project has been a clear case of me playing into Derek Yu’s Inventor archetype, it has certainly been a valuable one.


References

Jorgensen, K. (2013) Gameworld Interfaces. Cambridge, MA: MIT Press.

Stanislavski, C. (1937) An Actor Prepares. Translated by Hapgood, E. London: Methuen Drama.

“Thoughts and preferences.” Dwarf Fortress Wiki. <https://dwarffortresswiki.org/index.php/Thoughts_and_preferences> (Accessed on: October 15th, 2021)

Wei, H., Bizzocchi, J. & Calvert, T. (2010) “Time and Space in Digital Game Storytelling” in International Journal of Computer Games Technology, Vol 2010.

Widyani, Y. (2013) ‘Game Development Life Cycle Guidelines.’ Proceedings of ICACSIS 2013.

Yu, D. (2021). Indie Game Dev: Indie Archetypes. <https://www.derekyu.com/makegames/archetypes.html> (Accessed on: November 22nd, 2021)

Final Major Project: UI and Player Feedback

As I moved into the final weeks of the project, I began sharing builds of the game with people. I was interested to see how people reacted to it with no tutorialisation – I knew that I no longer had time to finesse particularly elegant versions of my original example outputs, but I hoped that player feedback would bring my attention to some immediate ways I could better engage players.

Feedback on the whole centred on feeling confused and overloaded – some players would have preferred less written information, some more. Most wanted either to interact with the game at a more moment-to-moment level, or to have a way to dig deeper into the repercussions of their actions.

To solve the informational overload, I returned to my case study of Dwarf Fortress, and to my original notes on the different kinds of ‘text’ one creates when directing a play – stage directions, notes on performance and notes on internal motivations. By splitting up these types of information – as Dwarf Fortress does with its thoughts, combat and critical events – and tying them to different modes of player interaction, I could both lessen the player’s mental load and provide ways for them to gain new perspectives on the action.

One simple UI solve was making the environment lightly interactive – when hovered over, an Area would change colour and output its name, signalling that it was available to be selected. In the future I imagine adding other small ways for the player to interact with the environment – maybe changing the colour of the moon (as Stanislavski describes altering the lighting states of his theatre exercises in An Actor Prepares) or adding different props to the scene.
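In Unity terms, this kind of hover highlight needs little more than a collider and the mouse-over callbacks. A minimal sketch, assuming each Area object has a Collider and a SpriteRenderer attached – the class and field names here are illustrative rather than the project’s actual code:

```csharp
// Highlights an Area and logs its name while the mouse is over its collider.
using UnityEngine;

public class AreaHover : MonoBehaviour
{
    public string areaName = "The Stage";        // assumed: set per Area in the inspector
    public Color highlightColour = Color.yellow;

    SpriteRenderer spriteRenderer;
    Color originalColour;

    void Awake()
    {
        spriteRenderer = GetComponent<SpriteRenderer>();
        originalColour = spriteRenderer.color;
    }

    // Unity calls these when the mouse enters/leaves this object's collider.
    void OnMouseEnter()
    {
        spriteRenderer.color = highlightColour;
        Debug.Log(areaName);                     // stand-in for the game's name readout
    }

    void OnMouseExit()
    {
        spriteRenderer.color = originalColour;
    }
}
```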

I also rewrote how the game processes its written output – if the player is focusing on an actor, the game now checks whether an Action concerns them directly and, if so, outputs it to a reader at the bottom of the screen; likewise, if a room is selected, all the Actions inside it are output to the left-hand scrolling readout. If the player is focused on nothing, no written output is communicated (though it is all still saved in a master debug log).
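A simplified sketch of that routing logic – the FocusType enum, the OnActionResolved hook and the list-based ‘readers’ are assumptions made for illustration, not the project’s actual code:

```csharp
// Routes each resolved Action's text to the correct reader, depending on
// what the player is currently focusing on.
using System.Collections.Generic;

public enum FocusType { None, Actor, Area }

public class OutputRouter
{
    public FocusType Focus = FocusType.None;
    public object FocusTarget;                                // the focused Actor or Area

    public List<string> BottomReader = new List<string>();    // single-actor feed
    public List<string> LeftReadout = new List<string>();     // whole-room feed
    public List<string> DebugLog = new List<string>();        // master log, always written

    // Called whenever the simulation resolves an Action.
    public void OnActionResolved(string text, object actor, object area)
    {
        DebugLog.Add(text);                                    // everything is kept here

        if (Focus == FocusType.Actor && FocusTarget == actor)
            BottomReader.Add(text);                            // Actions concerning the focused actor
        else if (Focus == FocusType.Area && FocusTarget == area)
            LeftReadout.Add(text);                             // all Actions inside the selected room
        // FocusType.None: nothing is written to the on-screen readers
    }
}
```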

When my tutor played through the game in this state, he reflected that he still didn’t have a great idea of what was going on. Aside from a tutorial, I imagined that a greater degree of audiovisual feedback would help players make sense of the procedural activity. I quickly recorded some footstep sounds (in a cat litter tray for outside, yuck), did some ‘item interaction’ foley work, and brushed up on my Russian to give characters individually pitched voice lines. I also tied some basic animation to the Actors’ sprites, triggered whenever they ‘speak’ to each other. Field recordings of Russian national parks, as well as an archive recording of Schumann’s ‘Two Grenadiers’ (one of the songs sung by Sorin during the play), provided the final piece of the (now slightly comedic) puzzle.

Upon showing the newly polished game to people, I was pleased to hear that most restarted after one playthrough and experimented more with the different UI ‘lenses’ on their second. Though I hadn’t found the time to increase player engagement by deepening its systems, I had been able to do so through some scrappy UI work – I will make sure to bring this kind of design into my projects at much earlier stages, as it really helped me make sense of the experience I was developing.

Final Major Project: Behavioural Loops for Agents

This devlog covers my work on the agent behaviour for Chekhov’s Gone! Broadly speaking, the emergent behaviour that occurs in Chekhov’s Gone! comes out of Actors performing Actions in pursuit of their Objectives.

Research

Procedural Storytelling in Game Design (2019) was of particular use in approaching this area, especially Tarn Adams’ chapter on emergent narrative in Dwarf Fortress. I took particular note of his provocation to procedural designers: produce an example output and ask ‘what’s the least I can do to generate more of these?’ I don’t for a minute think I’d be able to procedurally generate dialogue on a par with Chekhov, but one thing that has always struck me about his stage work is the specificity of his stage directions (many are often cut from the original Russian, and indeed many of those were incorporated from Stanislavski’s directorial notes).

So, an initial gesture might be to generate something like this:

Konstantin squeezes [Dorn’s] hand hard and embraces him.

Chekhov, A., trans. Frayn, M. (1986)

Or:

[Dorn] takes the snuffbox from [Masha] and flings it into the bushes.

Chekhov, A., trans. Frayn, M. (1986)

I followed Adams’ exercise of breaking down these examples on the hunt for design implications:

  • Konstantin / Dorn – characters in the play, but what do they need beyond that? Is it as complex as the thoughts and preferences of Adams’ own dwarves, as referenced [link to DF case study]?
  • Squeezes / takes / flings / embraces – verbs of physical interaction are probably the chief means of communicating meaning in this simulation, so I should have a wide variety. But perhaps they could be grouped under simpler actions, and have a more superficial modelling? I am not making a fighting game set in the world of Russian theatre. (But maybe that’s next…)
  • Hand – do I need to concern myself with modelling body parts? This can probably be achieved more quickly if ‘hand’ is one of the abstract linguistic components of the action ‘squeeze’ or ‘take’.
  • Snuffbox – items will need to exist in the game; do they need to have functions themselves?
  • From – characters will need to be able to carry items, and other characters must be able to take them from their person; but does a character need to have observed another holding an item in order to know that they have it?
  • The bushes – again, I probably don’t need to model environments in their totality; rather these could be tied to action verbs in the abstract.

Performing this exercise gave me some clarity on the depth of simulation I would need to achieve – though still nervous about attempting this challenge, I had some direction on how to start assembling the ingredients for my agent behaviour.

Objectives

As a nod to Stanislavski, who produced and directed the second, most famous production of The Seagull, I would construct the agent behaviour around concepts from his acting theory (Stanislavski, 1937). Rounds would be called units, turns would be called beats – but the real meat of his work is in objectives. Any agent in the game would need to be able to differentiate between Actions that would further their character’s objectives and those of less importance.

So, I created an Objective class. This I conceived as a broad umbrella for many sub-objectives – one might term this Objective class Stanislavski’s ‘super-objective’ and the sub-objectives his ‘objectives’, but I won’t be that picky. Each is named around the general ‘thing’ an Actor wants to get done – Masha confessing her love to Konstantin, Sorin convalescing – but contains a number of tasks that need accomplishing in pursuit of that larger aim. These I defined as sub-classes (a rough sketch follows the list below):

  • RoomContains – a class referencing an Area and an Item. The Item needs to be in the Area (this was mostly a test of my Agent’s ability to follow through with tasks, but does map to more servile Actors like Masha bringing the samovar to the drawing room etc.)
  • ActorsIn – a class containing an Area and a list of Actors who need to all be in the Area at once for the objective step to mark complete. This was the most complex objective step to code, as it required a new (and pretty tricky) Action to be created: the Influence Action (more on that later).
  • AloneWith – a class referencing a partner Actor. If the Actor is in a room alone with the partner, the objective step is complete.
  • TalkTo – this is a slightly more dynamic class, as it can reference either an Actor, a Topic or both an Actor and a Topic, and has a dangling bool for whether or not the conversation needs to happen without other Actors in the room.
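As a rough illustration of how these sub-classes might hang together, here is a minimal sketch – the ObjectiveStep base class, the IsComplete checks and the stub Actor/Item/Area/Topic types are my simplifications, not the project’s actual code:

```csharp
// Minimal stubs standing in for the real classes, just so the sketch is self-contained.
using System.Collections.Generic;
using System.Linq;

public class Actor { public string Name; public Area Location; }
public class Item  { public string Name; public Area Location; }
public class Area  { public string Name; }
public class Topic { public string Name; }

// One task within a larger Objective.
public abstract class ObjectiveStep
{
    public abstract bool IsComplete();
}

// The Item needs to be in the Area.
public class RoomContains : ObjectiveStep
{
    public Area Area; public Item Item;
    public override bool IsComplete() => Item.Location == Area;
}

// Every listed Actor must be in the Area at the same time.
public class ActorsIn : ObjectiveStep
{
    public Area Area; public List<Actor> Actors = new List<Actor>();
    public override bool IsComplete() => Actors.All(a => a.Location == Area);
}

// The Actor must be alone in a room with a partner Actor.
public class AloneWith : ObjectiveStep
{
    public Actor Self; public Actor Partner; public List<Actor> AllActors;
    public override bool IsComplete() =>
        Self.Location == Partner.Location &&
        AllActors.Count(a => a.Location == Self.Location) == 2;
}

// A conversation needs to have happened with an Actor, about a Topic, or both.
public class TalkTo : ObjectiveStep
{
    public Actor Target; public Topic Topic; public bool MustBePrivate;
    public bool HasHappened;                    // set by the conversation system
    public override bool IsComplete() => HasHappened;
}

// An Objective bundles several steps in pursuit of one larger aim.
public class Objective
{
    public string Name;
    public List<ObjectiveStep> Steps = new List<ObjectiveStep>();
    public ObjectiveStep CurrentStep => Steps.FirstOrDefault(s => !s.IsComplete());
    public bool IsComplete => Steps.All(s => s.IsComplete());
}
```

In this sketch an Objective simply works through its steps in order, with CurrentStep returning the first incomplete one – roughly the ‘currentObjective’s currentStep’ that the behaviour loop consults later in this post.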

Already I was finding myself pressed for time, so I banked these as functional and resolved to diversify them further if I found a few extra days on the project.

Agent Interactions

From the above Objectives (and my own early experiments) I had identified a number of Actions that the Agents would need to be able to perform: Move (including Stay), Pick Up, Place, Talk To, and Influence. I defined subclasses of Action for all of these, and wrote a method inside the Action class which would look them up based on the Actor’s ‘tasktype’ string (pulled from their currentObjective’s currentStep). Each of these needed a bespoke if-tree to define an Actor’s behaviour, and I was happy to see that once I had written one, the rest were relatively easy to put together. I found the time to write classes and methods for Listen, Ignore, Give and Take as well – many others are obviously needed for Chekhov’s Gone! to move out of the prototype phase, but these were enough to start observing behaviour.
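A condensed sketch of that lookup, written as a simple factory on the Action base class – the subclass names mirror the Actions listed above, but the CreateFor method and its string keys are assumptions for illustration:

```csharp
// Each Action subclass carries its own behaviour if-tree; CreateFor maps the
// 'tasktype' string from the current objective step to the right subclass.
public abstract class Action
{
    public abstract void Perform();    // the bespoke if-tree lives in each subclass

    public static Action CreateFor(string taskType)
    {
        switch (taskType)
        {
            case "Move":      return new MoveAction();
            case "PickUp":    return new PickUpAction();
            case "Place":     return new PlaceAction();
            case "TalkTo":    return new TalkToAction();
            case "Influence": return new InfluenceAction();
            default:          return new StayAction();   // assumed fallback behaviour
        }
    }
}

public class MoveAction      : Action { public override void Perform() { /* pathing logic */ } }
public class StayAction      : Action { public override void Perform() { /* do nothing this beat */ } }
public class PickUpAction    : Action { public override void Perform() { /* pick up an Item */ } }
public class PlaceAction     : Action { public override void Perform() { /* place a held Item */ } }
public class TalkToAction    : Action { public override void Perform() { /* start a conversation */ } }
public class InfluenceAction : Action { public override void Perform() { /* coax Actors towards an Area */ } }
```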

Priorities

Trying to create an if-tree that contained all of the different Actions would have been a combinatorial nightmare, so I decided on a system of weighting based on character motivation, predisposition and objective. Basically, the game processes all possible Actions an Agent can legally take on that turn (it can’t move to Areas that aren’t adjacent, it can’t talk to characters not sharing its location, etc.), then assigns each a small, random initial integer weight. Then, depending on the type and step of the Actor’s current Objective, it selects an ‘immediately salient’ Action and weights it more heavily. Once this is done, the list of valid Actions is ordered by weight, and (if the game is proceeding automatically) the top three are returned. A number is generated between 0 and the sum of their combined weights, and this decides which of the top three Actions the Actor will take.
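To make that weighting step concrete, here is a rough sketch – GameAction, the Choose method and the specific numbers (a 1–3 initial weight, a +10 salience boost) are assumptions rather than the project’s actual tuning:

```csharp
// Weight, rank and then semi-randomly choose between the top three legal Actions.
using System;
using System.Collections.Generic;
using System.Linq;

public class GameAction
{
    public string Name;
    public int Weight;
}

public static class ActionSelector
{
    static readonly Random rng = new Random();

    // 'validActions' are the legal Actions for this Agent this beat;
    // 'salient' is the Action picked out by the current Objective step (may be null).
    public static GameAction Choose(List<GameAction> validActions, GameAction salient)
    {
        if (validActions.Count == 0) return null;

        foreach (var action in validActions)
            action.Weight = rng.Next(1, 4);              // small random initial weight

        if (salient != null)
            salient.Weight += 10;                         // objective-driven boost

        // Order by weight and keep the top three candidates.
        var topThree = validActions.OrderByDescending(a => a.Weight).Take(3).ToList();

        // Roll between 0 and the sum of their weights, then walk the list until
        // the roll falls inside one candidate's 'slice'.
        int roll = rng.Next(0, topThree.Sum(a => a.Weight));
        foreach (var action in topThree)
        {
            roll -= action.Weight;
            if (roll < 0) return action;
        }
        return topThree.Last();                           // unreachable in practice
    }
}
```

Heavier candidates occupy a larger slice of the roll, so the objective-driven Action usually – but not always – wins out.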

Testing

I had had success with this kind of system during my Critical Play project, but I was torn between the randomness of this approach and the clarity of one that automatically selected the highest-weighted Action. In pursuit of answers I set up a test environment that would loop through 100 ‘beats’, output the text from each cycle as a string, and save it as a .txt file – then repeat that five times. This way I could gather a large(ish) sample size and perform comparative analysis – I wasn’t confident that, on showing the game to players at this stage, they would be able to discern the difference between the two approaches. Upon comparing the output logs, it was clear that, while in the short term automatic selection seemed to create a more cohesive ‘story’, once the characters had completed their objectives (which they did in fairly short order) the game devolved into randomness. The consistent mixture of objective-driven behaviour and random – or ‘incidental’ – Actions in the weighted system yielded more satisfying results across the board, so I committed to that.
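The batch-testing setup was essentially a loop that writes each run’s output to disk. A simplified sketch, assuming a hypothetical Simulation class whose RunBeat() returns that beat’s text output:

```csharp
// Runs five batches of 100 beats and saves each batch's text output to a .txt file.
using System.IO;
using System.Text;

public class Simulation
{
    // Stub standing in for the real behaviour loop; RunBeat would advance one
    // 'beat' and return the text generated by that cycle.
    public string RunBeat() => "…";
}

public static class BehaviourTester
{
    public static void RunBatches(int batches = 5, int beatsPerBatch = 100)
    {
        for (int batch = 0; batch < batches; batch++)
        {
            var simulation = new Simulation();            // fresh Actors and Objectives per batch
            var log = new StringBuilder();

            for (int beat = 0; beat < beatsPerBatch; beat++)
                log.AppendLine(simulation.RunBeat());

            File.WriteAllText($"test_output_{batch}.txt", log.ToString());
        }
    }
}
```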

Adding Player Interaction

After burning a whole three days failing to implement a slider-driven rewind system for the game, I resolved to press on with a simpler method of player interaction. I imagined the player, as director, clicking on a character portrait to ‘give notes’, which would either directly select one of the top three Actions or add a significant weight to one of them. To achieve this, I placed a boolean stop in the middle of the normal ‘beat’ cycle, which would check whether the player was ‘focusing’ or not – if not, it would proceed with weighting as normal; if so, it would zoom in on the character in question and display three buttons above the sprite’s ‘head’. Again, by breaking the behaviour into discrete static functions I was able to add this functionality very quickly.
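In coroutine terms, the ‘boolean stop’ can be as simple as waiting on a flag. The sketch below assumes a Unity coroutine drives the beat cycle; the structure and field names are illustrative rather than the project’s actual implementation:

```csharp
// A beat cycle that pauses mid-way if the player has clicked a character portrait.
using System.Collections;
using UnityEngine;

public class BeatCycle : MonoBehaviour
{
    public bool playerIsFocusing;    // set true when the player clicks a portrait
    public bool noteGiven;           // set true when the player picks one of the three buttons

    IEnumerator RunBeat()
    {
        // ...weight and rank this Actor's valid Actions here...

        if (playerIsFocusing)
        {
            // Zoom in, show the top three Actions as buttons above the sprite's
            // head, then wait until the player has 'given a note'.
            yield return new WaitUntil(() => noteGiven);
            noteGiven = false;
        }

        // ...resolve the chosen Action and move on to the next Actor...
        yield return null;
    }
}
```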

I have some questions about the level of UI detail here – whether a player needs explicit detail about the actions they are choosing, and whether the continued presence of the Action log is useful – but I will pack that up into some playtesting later on in development.

The Final Loop

The finished prototype behaviour loop looks like this:

Compromises

This project in particular has proved difficult to achieve in the time available – I have invested a lot of time in setting up systems for efficiency and expansion, but this has come at the cost of implementing more complex mechanics and deeper content.

For example, to save on pathfinding, I decided that every Agent would know (or be able to check) the location of every other Agent, and path to it simply through a list of directional neighbouringAreas attached to each Area. The drawing room, for example, leads left to the kitchen, but also left to the upstairs, garden and stage. If an Agent wants to path to the stage from the drawing room, it finds the stage within the list of leftward Areas and proceeds to the first Area on that list, then repeats the process.
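A minimal sketch of that direction-based step – the dictionary of neighbouringAreas and the NextStepTowards helper are illustrative names, and the real Area class carries far more data:

```csharp
// Each Area stores its neighbours grouped by direction; an Agent steps towards a
// destination by walking to the first Area in whichever directional list contains it.
using System.Collections.Generic;

public class Area
{
    public string Name;

    // e.g. the drawing room's "left" list: kitchen, upstairs, garden, stage
    public Dictionary<string, List<Area>> NeighbouringAreas =
        new Dictionary<string, List<Area>>();

    // Returns the next Area to walk to in order to reach 'destination',
    // or null if no directional list contains it.
    public Area NextStepTowards(Area destination)
    {
        foreach (var direction in NeighbouringAreas)
        {
            if (direction.Value.Contains(destination))
                return direction.Value[0];   // proceed to the first Area on that list
        }
        return null;
    }
}
```

An Agent calls this once per Move Action, repeating the check from its new location each beat – cheap, if occasionally circuitous, compared to proper pathfinding.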

Conversation at the moment is also quite a flat system, with Actors sharing a pool of common Topics and loading in with a pool of bespoke Topics, some of which are secret and all of which have a tension threshold, or ‘cost’ – i.e. Actors won’t talk about personally sensitive information unless in a high state of tension. I’d like to deepen this with an attitude and a memory system, but time is too precious.
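As a quick illustration of that tension ‘cost’ gate, a sketch – the Topic fields and the ChooseTopic helper are assumed names, and the real system presumably does more than pick the costliest affordable topic:

```csharp
// Topics carry a tension threshold; an Actor only raises topics it can currently 'afford'.
using System.Collections.Generic;
using System.Linq;

public class Topic
{
    public string Name;
    public bool IsSecret;
    public int Cost;          // tension threshold required before an Actor will raise it
}

public static class Conversation
{
    // Pick the costliest topic the speaker is currently tense enough to raise.
    public static Topic ChooseTopic(int speakerTension, List<Topic> commonPool, List<Topic> bespokePool)
    {
        return commonPool.Concat(bespokePool)
                         .Where(t => t.Cost <= speakerTension)
                         .OrderByDescending(t => t.Cost)
                         .FirstOrDefault();
    }
}
```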

My modular behaviour loop does enable me to insert or abandon functionality as I go, however, so I have left commented steps for mechanics I have yet to find the time to implement: extra weighting with regards to character traits and attitude, other Actors ‘observing’ actions, Actors refusing a directorial note, and the aforementioned deeper conversational systems. If I get to these mechanics, I should be able to implement them quickly; if not, the behavioural loop will still work.


References

Stanislavski, C. (1937) An Actor Prepares. Translated by Hapgood, E. London: Methuen Drama.

Adams, T. (2019) ‘Emergent Narrative in Dwarf Fortress’, in Adams, T. & Short, T.X. (eds.) Procedural Storytelling in Game Design. London: CRC Press, pp. 23-36.

Final Major Project: Scriptable Objects

Even at this early stage of development, I knew my project would need to include interactions between the following kinds of agents and objects: Actors, Areas, Items, Objectives and Actions. I also knew that – especially in terms of Objectives and Actions – I would likely have a lot of them, and that in order to facilitate emergent behaviour they would need to refer to each other in multiple dynamic ways.

One of my main concerns was having to hand-write individual versions of these classes for every character and object, so I decided to use Unity’s Scriptable Objects. I was quickly able to draft some Actor Scriptable Objects with the following variables:

  • name (string)
  • tension (int)
  • status (int)
  • holding (List<Item>)
  • location (Area)
  • want (Item)
  • direction (Area)
  • objectives (List<Objective>)
  • currentObjective (Objective)
A Unity talk I used as inspiration for some of my Scriptable Object-driven architecture

Creating new ones was as simple as adding a CreateAsset command to the script, and I could right-click my way through the dramatis personae with ease! However, Scriptable Objects can’t be instantiated in-game – they’re more like data containers, which can help a class object or game object decide how to behave. Also, without some tinkering, their values persist between editor and runtime – if an Actor picked up an Item in-game, the Actor would start the next run with the Item in their ‘holding’ list; if an Actor’s tension reached 7 in one run, it would start at 7 on the next. The Actor, Item (etc.) classes would therefore contain the methods and the runtime variables that the game would need to reference and change, while the Scriptable Objects would hold the initial values.
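For illustration, an ActorData Scriptable Object along these lines might look like the sketch below – the CreateAssetMenu path and exact field types are assumptions (mirroring the variable list above), and the empty Data stubs are only there so the sketch stands alone:

```csharp
// A Scriptable Object holding an Actor's initial values; right-clicking in the
// Project window under Create > Chekhov > ActorData creates a new asset.
// (In practice each of these classes would live in its own, matching-named file.)
using System.Collections.Generic;
using UnityEngine;

public class ItemData : ScriptableObject { }          // stubs: the real Datas have their own fields
public class AreaData : ScriptableObject { }
public class ObjectiveData : ScriptableObject { }

[CreateAssetMenu(fileName = "NewActorData", menuName = "Chekhov/ActorData")]
public class ActorData : ScriptableObject
{
    public string actorName;                           // 'name' already belongs to ScriptableObject
    public int tension;
    public int status;
    public List<ItemData> holding;
    public AreaData location;
    public ItemData want;
    public AreaData direction;
    public List<ObjectiveData> objectives;
    public ObjectiveData currentObjective;
}
```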

I thus renamed my Scriptable Objects to ‘Datas’ – Actor became ActorData, Item became ItemData, and so on. I wrote a number of Initialise functions, which would create new class instances for each ‘Data’ Scriptable Object and then add them to a series of lists, which (through string comparison) I could use to ‘point’ references (originally Datas) towards class instances. So, InitialiseActors() creates a new instance of the Actor class for each ActorData Scriptable Object, InitialiseItems() a new instance of the Item class for each ItemData Scriptable Object, and then the game loops through each list of class objects, comparing the names of connected Scriptable Objects to the names of its class objects. Phew!
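A condensed sketch of that two-pass pattern – plain stub classes stand in for the Unity types here, and the currentPartner field is a hypothetical example of a Data-to-Data reference being re-pointed at runtime class instances:

```csharp
// Pass one creates a runtime class instance per Data asset; pass two re-points
// references between Datas at the corresponding class instances by name.
using System.Collections.Generic;
using System.Linq;

public class ActorData                 // stub standing in for the ActorData Scriptable Object
{
    public string actorName;
    public int tension;
    public ActorData currentPartner;   // hypothetical Data-to-Data reference
}

public class Actor                     // runtime class the game actually manipulates
{
    public string Name;
    public int Tension;
    public Actor CurrentPartner;
}

public static class GameData
{
    public static List<Actor> Actors = new List<Actor>();

    public static void InitialiseActors(List<ActorData> actorDatas)
    {
        Actors.Clear();

        // First pass: copy each Data's initial values into a fresh class instance.
        foreach (var data in actorDatas)
            Actors.Add(new Actor { Name = data.actorName, Tension = data.tension });

        // Second pass: resolve cross-references via string comparison on names.
        for (int i = 0; i < actorDatas.Count; i++)
        {
            if (actorDatas[i].currentPartner != null)
                Actors[i].CurrentPartner =
                    Actors.First(a => a.Name == actorDatas[i].currentPartner.actorName);
        }
    }
}
```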

I had succeeded in setting up a smooth pipeline for adding agents and objects into the game – if I wanted to increase the number of Actors in the game, it would only take a few clicks. However, I would still need to manually add individual ObjectiveData Scriptable Objects to each, so I created a LoadData class that, using the OnValidate method (called outside of runtime, essentially whenever Unity recompiles and refreshes its assets), automatically assigned ObjectiveDatas to ActorDatas based on folder hierarchy. It also accessed the core scripts of the game and updated their initialisation lists – this automation even allowed me to declare those lists as static, which sped up the coding further.

Automating Scriptable Objects Using CSVs

Even with the Scriptable Object creation process all set up, there was one kind of content that would prove too time-consuming to create individually: conversation topics. These objects needed to include a lot of quite variegated data, and also needed to be organised by Actor. In my Experimental Development project I had relied on JSON deserialisation to process large amounts of dynamic variables, but for this project I wanted to use an even more accessible format: the spreadsheet. I created an Editor script that would parse a .csv file (a spreadsheet rendered into lines of comma-separated values) into the values of TopicData Scriptable Objects, given the name of an ActorData file. This method could be called from the Utilities tab of Unity, and would search for a .csv file that shared the ActorData name, then create a series of TopicData Scriptable Objects by parsing each comma-separated string.

The tutorial I used to develop my CSVToSO script
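A rough sketch of what such an editor utility might look like – the menu path, folder layout and three-column CSV format are assumptions for illustration rather than the tutorial’s or the project’s actual code, and as an editor script it would need to live in an Editor folder:

```csharp
// Parses a per-actor .csv file into TopicData assets, one asset per row.
using System.IO;
using UnityEditor;
using UnityEngine;

public class TopicData : ScriptableObject      // included here for self-containment;
{                                              // would normally live in its own runtime file
    public string topicName;
    public bool isSecret;
    public int cost;                           // tension threshold
}

public static class CSVToSO
{
    [MenuItem("Utilities/Generate Topics From CSV")]
    public static void GenerateTopics()
    {
        string actorName = "Masha";                                   // example actor
        string csvPath = $"Assets/Data/Topics/{actorName}.csv";       // assumed folder layout

        // Assumed column order (no header row): topic name, isSecret, cost
        foreach (string line in File.ReadAllLines(csvPath))
        {
            string[] fields = line.Split(',');

            var topic = ScriptableObject.CreateInstance<TopicData>();
            topic.topicName = fields[0];
            topic.isSecret = bool.Parse(fields[1]);
            topic.cost = int.Parse(fields[2]);

            AssetDatabase.CreateAsset(topic,
                $"Assets/Data/Topics/{actorName}/{topic.topicName}.asset");
        }
        AssetDatabase.SaveAssets();
    }
}
```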

The final step in this was to automate the assignment of Topics – I didn’t want to be manually dragging and dropping all of them to each Actor – so I updated the LoadData class to populate each ActorData class with the TopicData Scriptable Objects in the folder that shared its name.

Conclusions

This was all a *lot* of work, and a big stretch of my confidence in C#. But knowing that I’ve created a smooth and modular pipeline for inserting both classes and content into Chekhov’s Gone! is not only a big step towards getting a quick-and-dirty prototype ready, but also a necessary foundational gesture towards the larger game, which will undoubtedly involve many more of these classes and certainly much more procedural content!

Final Major Project: Case Study – Dwarf Fortress

A major influence on this project both mechanically and visually, Tarn Adams’ Dwarf Fortress has been in development for almost twenty years. In this case study I will look at the three ways the game communicates dialogic and character information – through its event log, its ‘thoughts and preferences’ page, and its gameworld interface.

The Gameworld Interface

Dwarf Fortress is a top-down ‘dwarf management’ game, featuring heavy use of procedural generation and simulation. Its chief mode of interaction is via designating digging and build orders for a group of pioneering dwarves – players cannot instruct individuals to perform tasks directly, and dwarves are free to respond to their own needs and motivations (food, sleep, booze, not being eaten).

In order to facilitate the great depth of its simulation, the gameworld of Dwarf Fortress (at least without the help of a community-made tileset) is represented by ASCII symbols. This can be extremely confusing for new players, not least because, despite the simplicity of its graphics, the visual gameworld actually contains a lot of narrative and game-state information. Depth is communicated by blurred tiles; inclines by up- and down-facing chevrons. Multiple items can also occupy the same tile, and the game will cycle through which sprite to display – a solid blue tile indicates that the object or entity is wet, of course! Dwarves in ‘strange moods’ will have their sprite interrupted with a flashing exclamation point. Dwarves who are wounded flash red; if two entities are in combat their sprites can occupy the same tile, which flashes between them.

If the game is unpaused, all of this information is processed at as close to maximum frame rate as the computer can handle, and after a while begins to make a certain sense. A text readout even apprises the player of highly critical events: the changing of seasons, the arrival of visitors or packs of wild animals, mining breakthroughs, births, deaths and strange moods.

Beyond the graphical representation of the gameworld, there is also the interface – the means by which the gameworld is framed, and by which the player is connected to it (Jorgensen, 2013). In Dwarf Fortress, this takes the form of a number of superimposed, highly ludic ‘frames’, almost in the manner of an early GUI for a home computer. Available commands are displayed in windows, but with the push of a few buttons even more information is made available to the player. For example, while the game is paused players are able to use the ‘k’ key to ‘look at’ the contents and status of each and every tile. This tool can be used for a quick refresher – does ‘g’ mean goblin or goose, for example? – or for in-depth examinations of a stockpile of trading goods. The ability both to watch the simulation run through the real-time graphical gameworld and to pause it to uncover more is a good solve for representing systems with deep layers of information.

The Event Log

Probably the most-referenced (and memed) piece of UI in Dwarf Fortress is the Event Log, accessible through pressing L from the main screen. This records the notable events mentioned in the gameworld’s readout, but also functions as a combat log of (often hilariously) microscopic scope.

In order to view the event log, players must pause the gameworld, after which they are free to scroll through the pages of narrativised combat data that the game has recorded. While the time signature of combat as it is rendered in game is very ‘scenic’ (one action per turn, physics modelled with some degree of accuracy), scrolling through the hyper-detailed event log while the game is paused feels much more like a deep-dive ‘stretch’.

Thoughts & Preferences

The thoughts and preferences panel, accessible by examining the profile of any living entity in the gameworld, contains some of the most detailed procedural information that Dwarf Fortress is capable of generating. The citizens of the player’s fortress carry the most information, as shown below:

The thoughts & preferences panel for a single dwarf!

A single dwarf is made up of a multitude of the following variables:

  • Thoughts – thoughts give narrative shape to how a dwarf reacts to their surroundings, and events within their fortress. These thoughts, if experienced often enough, will turn into memories, which can affect the mood of a dwarf and change its beliefs!
  • Familial Status – unsurprisingly, this lists their familial relationships and objects of worship.
  • Civilisation Membership – this details, in order, which civilisations, groups and fortresses they have held membership with.
  • Age and date of birth – Dwarf Fortress simulates an entire world from its creation, so every dwarf has a birthday!
  • Physical description – some of this is flavour, but much of it is subtly gameplay-critical – more muscle and fat means more mass for fighting; fat dwarves survive longer when starved. It also displays any injuries a dwarf has sustained.
  • Physical attributes – attributes are the most traditionally RPG-like element of the Thoughts & Preferences screen: strength, agility, toughness, endurance, recuperation and disease resistance. They pertain to the core navigation, combat and fortress construction loops of Dwarf Fortress.
  • Preferences – this is a nice list – being exposed to something a dwarf likes (be that a certain type of booze or a certain colour) will give them a greater chance of having a positive thought, which of course keeps them happy. The inverse is true for things the dwarf dislikes though!
  • Mental attributes – far more variegated than physical attributes, mental attributes mostly just affect the duration and outcome of certain skills.
  • Beliefs – beliefs, along with facets, dictate the needs of dwarves, and are organised into cultural (those shared with their current civilization or group) and personal. High belief values can unlock progression in certain conversational skills, whereas low values can block that progression.
  • Goals – though it is not entirely understood how goals affect dwarves’ behaviour, the fact that they have pretty 1-to-1 gameplay outcomes (creating a masterwork weapon, mastering a skill, having a family) suggests that they give a high chance of positive thoughts and happiness upon completion.
  • Facets – facets determine the dwarf’s propensity for experiencing certain thoughts. For example, a dwarf with a high propensity towards romance might be more likely to enter relationships. Dwarves with high differential facets are also more likely to form grudges. Like beliefs, high or low facets can block or enable progression in certain conversational skills.
  • Needs – these can connect to Goals or Preferences, but speak to the immediate status of the dwarf more than the long-term, and heavily shape its actions.

Now, all of the above are contained within the game in the form of numbers (usually between -50 and 50 or 0 and 100), but they are also paired with a string that gets output to this thoughts and preferences panel. By obscuring the numerical information behind procedurally generated sentences, Dwarf Fortress heightens a player’s narrative involvement (Calleja, 2011) – if they wish to pick a particularly productive job for a dwarf, for example, they will literally need to get to know them and their history!

Much like the event log, game time is suspended in order for the player to parse this information. The time signature of the thoughts and preferences panel could be said to be ‘stretch’ as well – a sort of internal monologue moment that steps the player away from the real-time narrative, to dig deeper into a dwarf’s past experience and current feelings. However, the UI also uses colour – not only to differentiate between information types, but also to draw the player’s eye towards particularly new or relevant information. For instance, thoughts are rendered in white, but fade into grey the older they are. Wounds appear red; shaken beliefs are coloured brown. This gives the player a palpable sense of the connection between narrative past and present in Dwarf Fortress – Jayemanne’s diachrony (2020) – something that other games often struggle to achieve.

Conclusion

While I don’t expect to be able to even scratch the surface of DF’s procedural complexity, the layout and time signatures of its narrative information – immediate, key activity being displayed through readouts and animation, but more detailed information (about both the past and present) requiring effort on the part of the player, both to change the game’s time signature and to invest in parsing it themselves – have given me a lot of ideas for Chekhov’s Gone!


References

Calleja, G. (2011) ‘Emotional involvement in digital games.’ International Journal of Arts and Technology. 4. pp 19 – 32.

Jayemanne, D. (2020) “Chronotypology: a Comparative Method For Analysing Game Time” in Games and Culture Vol 15, Issue 7.

Jorgensen, K. (2013) Gameworld Interfaces. Cambridge, MA: MIT Press.

Final Major Project: Identifying a Direction

I spent my first week back in the studio concepting a number of responses to the design recommendations of my thesis:

Brickbreaker Dialogue Game

Focusing on stretch and spatial opposition, this opens up the traditional dialogue ‘choices’ of relationship games / visual novels like Love Island or Doki Doki Literature Club to new mechanics. Conversations are resolved a bit like a game of Brick Breaker, with individual words being ‘fired’ at the player’s side of the screen, propelled by varying amounts of energy (and in different directions?). The player has a Tetris-like ‘queue’ of words that make up their dialogue response, and chooses how much of this queue to use up in accepting/deflecting/rejecting the NPC’s ‘bricks.’ A ‘stamina’ system is also tied to the number of words used, and impacts the number of words and the stamina available to the player in the next round. Depending on the ratio of accepted/deflected/rejected words, the NPC’s next dialogue option changes… The interface is superimposed, ludic and emphatic.

Editing Correspondence Game

Focusing on both stretch and summary, as well as on-screen and off-screen narrative, this game casts the player as some sort of editor / correspondent, receiving and editing the manuscript pages / poetry of a writer. A non-interactive letter is shown to them, then the newest ‘draft’ becomes available to edit – by selecting different cursors (strikethrough, highlight, notate – maybe unlocking different phrases through, e.g., doing lots of strikethroughs in one edit) the player can make suggestions, which the game will respond to procedurally, with a new letter reflecting on those changes (how to calculate attitude?) and a new draft implementing some (almost certainly not all!) of those changes. The interface is integrated, fictional and ecological.

Chekhov Dwarf Fortress Game

This game focuses on summary, mobility of characters, and emergent paths and axes. Agents with different wants / needs / knowns / attitudes / lenses / states interact emergently (Dwarf Fortress-style) on a stage set – the setting and characters are drawn from, e.g., Chekhov’s The Seagull, and each (timed) act is broadly set up following that play’s structure. A UI ‘box’ outputs the ‘summary’ of agent actions and conversations (X talked to Y about Z; it made her happy), maybe collecting everything into one ‘prompt script’ at the end of the act. Interaction is extremely minimal, maybe even non-existent; only the decision to ‘commit’ to a particular runthrough and advance to the next act is made. The interface is superimposed, fictional (if it is indeed a prompt script/rehearsal notes) and emphatic.

Choosing a Project

I presented these three projects to David and Maddalena and, after talking through the scope and challenges of each one, decided to pursue the third. This was due to a number of factors: my familiarity with the theatrical process; the procedurality of the project (something I had already examined in my Experimental Development project); the low visual demand of the project; and the high level of challenge it provided.

Upon considering the timescale of the project, I decided to reduce the scope of this artefact to the first act of The Seagull – I would lose the committing mechanic, but I felt that digging further into agent behaviour and emergent narrative generally would be worth the work. By designing dialogue for turn-based ‘summary’ time I also hoped to provoke the reflective, ‘actor’-like headspace in players that I had identified in my Critical Play project and Understanding Gaming Experience paper.

Final Major Project: Thesis Reflections and Conclusions

My thesis – titled Time, Space and Interface: Reframing dialogue design for better flow – explored three different frameworks from game studies academia. The first two frameworks – Wei, Bizzocchi and Calvert’s categories of time and space – are used to understand games from a narratological perspective; the third – Kristine Jorgensen’s gameworld interface theory – breaks down game interfaces into ontological frames.

Time can be categorised as:

  • Order
    • Linear
    • Non-linear
  • Speed
    • Scene
    • Summary
    • Stretch
    • Ellipsis
    • Pause
  • Frequency
    • Singular
    • Repetitive
    • Iterative

Space can be broken down into:

  • Topographical space
    • Layouts
    • Spatial oppositions
  • Operational space
    • Character/object mobility
    • Paths and axes
  • Presentational space
    • On-screen/off-screen
    • Acoustic space
    • Perspective
    • Spatial segmentation
    • Screen interface

Jorgensen’s gameworld interface theory posits three pairs of interrelational frames:

  • Integrated vs superimposed
  • Fictional vs ludic
  • Ecological vs emphatic

These frameworks are applied by their creators to games generally; my thesis sought to apply them specifically to the design of dialogue systems and sequences, using three case studies (Signs of the Sojourner, Oxenfree and We Should Talk) to evidence this application.

The result of these case studies was a number of design ‘recommendations’ or ‘possibilities’ – areas which I felt dialogue designers have yet to fully explore, and which might yield opportunities for innovation, or at least a break from the ‘standard dialogue meta’ also identified in my thesis.

  • Time
    • Frequency (singular) – despite the prevalence of duologues (plays between two people) in theatre, games writers have seemed reluctant to explore a singular dialogue system that might represent two people talking for an entire game. Perhaps this is a function of games designers leaning into the format’s strengths, multiple characters being cheaper to add than live performers are to hire, but a game of significant length staging a conversation between only two characters feels oddly radical.
    • Frequency (repetitive) – innovation can often be found in pushing traditional concepts to their limits, and a game which locks a player into a repeating dialogue sequence that they must learn to escape might have quite the diachronous effect.
    • Speed (stretch) – Signs of the Sojourner has shown how powerful slowing down a player’s experience of dialogue can be; using stretch to increase a player’s reaction time even further, maybe encouraging the reading of body language in animated characters or the close analysis of, as in We Should Talk, clauses or even individual words, might be a productive path. 
  • Space
    • Layout (parallel) – despite the existence of split-screen gaming, one layout barely touched upon in dialogue design is parallel. An interactive fiction version could be easily prototyped using Ink, in which, say, two conversations progress down either side of the screen and it is up to the players how long they remain ‘in control’ of each one. Alternatively, choices in one could ‘rewrite’ the other, as when films flash back to show a different perspective on events.
    • Spatial opposition – as raised in this paper, a system for the procedural varying of line length and dialogue shape could easily be developed. A ‘stamina’ system could fuel the length or intensity of a player’s lines (more stamina required to deliver longer lines), and the dialogue system could respond by matching intensity during ‘rising’ action and opposing it during conflict.
    • Spatial segmentation – though identified in this paper as a site of flow disruption, a possible solution to this might be to maximise the feeling of segmentation rather than minimise it. The first act of the game might not include a dialogue system at all – maybe it’s an exploration of a dream-like 3D environment; the second might be only a dialogue system, reflecting on the player’s experience in the first; the third could unify the two, or return to the first but inflected with information from the second.
  • Interface
    • Integrated, fictional, ecological – dialogue systems that evince these interface frames are few and far between, but perhaps, given recent advances in shader technology, it might be possible for a game’s dialogue to be delivered as a texture on a 3D environment. Gone Home (The Fullbright Company, 2013), but with the found text written on every object in the house; a player might interact by ‘writing’ their character’s inner monologue onto the surface, marking the house with their own thoughts.
    • Integrated, ludic, emphatic – a dialogue system that stages the process of editing, making each word a game piece that might be interacted with by the player/editor in some way – red strikethrough for a suggested cut, green highlighter for an enthusiastic remark. The next draft returns with certain changes made, others not, and the palimpsestic effect of these sequences builds up a relationship between the player/editor and the writer over time, although they never directly speak.

Over the next week, I will draft rough game pitches in response to some of these recommendations, and hopefully identify a final project on which to work!


References

Echodog Games (2020) Signs of the Sojourner [Video game]. Echodog Games.

Jorgensen, K. (2013) Gameworld Interfaces. Cambridge, MA: MIT Press.

Mertz, C. (2018) We Should Talk [Video game]. Mertz, C.

Night School Studio (2016). Oxenfree [Video game]. Night School Studio.

Wei, H., Bizzocchi, J. & Calvert, T. (2010) “Time and Space in Digital Game Storytelling” in International Journal of Computer Games Technology, Vol 2010.

Understanding Gaming Experience: Case Study (Signs of the Sojourner)

Here I have collated the case study work I performed on Signs of the Sojourner, using formal analysis, Calleja’s Player Involvement Model, and Costikyan’s uncertainty categories.

Formal Analysis

  • Hand size – 5 (constant)
  • Deck size – 10 (but grows over time as fatigue cards are added)
  • Deck update method – Replace 1 card from your deck with 1 from your partner’s deck after every encounter
  • Basic card interaction – Lay cards side by side, trying to match symbols on the touching side to reach a ‘concord’ and avoid ‘discord’.
    • Basic symbols: Circle (empathetic, deferential, observant); Triangle (diplomatic, logical, cooperative); Diamond (industrious, creative, curious); Square (direct, forceful, stubborn)
    • Special symbols (progression-locked or character-specific): Spiral (distressed or grieving); Paw (just for dogs)
    • Cards can have differing left and right symbols, matching left and right symbols, and (rarely) pairs of matchable symbols on either side
  • Special card interactions
    • Accommodate: duplicates the symbols of the previous card (creates an accord if both left and right symbols of the previous card are the same)
    • Accord: prevents the next card played from creating discord, even if it doesn’t match
    • Observe: playing this reveals your partner’s hand
    • Elaborate: copies the right-side symbol of the previous card (creates an accord if this card’s right-side symbol matches the previous card’s)
    • Reconsider: playing this shuffles your hand back into your deck and draws five new cards
    • Prepare: playing this allows you to ‘choose’ your next draw from your remaining unplayed cards
    • Clarify: can be inserted between cards, rather than just on the end of the row
    • Backtrack:
    • Fatigue: cannot match with anything (picked up over long journeys)
  • Turn cycles/round – min 1 (instant failure, highly unlikely); max 5
  • Rounds/encounter – Between 2 and 4 rounds per encounter, depending on the number of required ‘concords’
  • End of turn actions – Player and partner draw a new card from their respective decks
  • End of round actions – Reshuffle hand into deck; draw a new hand
  • Round win condition – Match 5 cards in a row = concord
  • Round loss condition – Unable to play a matching card = discord
  • Encounter win condition – Achieve X successes before achieving Y failures
  • Encounter loss condition – Achieve Y (usually 3) failures before achieving X successes
  • Information – Generally imperfect (partner’s hand is hidden), but playing the ‘observe’ card reveals your partner’s hand. Also, symbols before the encounter telegraph which symbols your partner’s deck will contain
  • Average encounter length – 2 minutes
  • Player/partner narrative relationship – Co-operative
  • Narrative objective – Stated by your partner at the beginning of the encounter; achieving concordance will mean you both agree to pursue this action; discordance represents disagreeing or failing to decide on further action. Early encounters are centred around agreeing on trade deals; later encounters are more emotionally driven
  • Narrative super-objective – Save your late mother’s bodega
  • Dialogue triggers – After each round (responsive to success or failure to match 5); after X successes (felicitous outcome); after Y failures (infelicitous outcome)

Player Involvement Model

  • Spatial – little to none in-encounter, although the scene setup gives the impression of a ‘real-life’ card game, with *you* facing your partner. This focuses attention on their animated reactions, allowing for greater affective involvement. A slight sense of exploration is achieved at the game-wide level, which increases narrative involvement.
  • Kinaesthetic – card selection and placement is smooth and pleasurable, with animated feedback for matching runs; controls are extremely accessible, with (on average) fewer than 2 physical actions needed to complete a turn.
  • Ludic – actions have a clear impact on the game state, often followed by changes in sound design and partner animation, furthering affective involvement; progression towards the ludic/narrative goal is made with each action, and clarified through UI ‘pips’ representing ‘concord’ and ‘discord’.
  • Shared – generally cooperative involvement with other game agents, specifically in a social, conversational setting, leads to a heightened sense of shared involvement.
  • Narrative – most elements are directed towards narrative involvement; encounters are one-time-only, and the number of steps on a journey is limited, as is the number of journeys, leading to a concrete sense of dramatic progression (5 ‘act’ structure); each encounter begins and ends with narrative information (personal to your partner and often relevant to your character biography), and relationship commentary is provided mid-encounter depending on player performance. Characters recur, and remember past encounters.
  • Affective – lots of game elements work towards affective involvement: clear communication of dramatic stakes, partner’s animated reactions, multi-stage narrative progression, responsive musical score; even deck-building involves emotional memory, as you replace one card with one of your partner’s after every encounter regardless of outcome; affective involvement is directly encouraged by the game description (a game about ‘making connections and building relationships[…] Your deck is a representation of you and how you communicate, and the goal of an interaction isn’t to ‘beat’ another character, but to be able to match cards with them to build a connection and communicate to each other.’)

Uncertainties

  • Performative uncertainty – avoided for the most part; actions do what they are expected to do 99% of the time.
  • Solver’s uncertainty – medium; player actions are limited to five choices, each with a binary result, but by thinking ahead and gaining information about your partner’s hand you can plan more creative ‘solves’ to the matching problems; deck-building itself is a puzzle-like endeavour, though limited to replacing one card at a time.
  • Player uncertainty – rules are simple (match symbols left to right) with special cards heavily tutorialised; failure does not halt progression, is built into the theme and narrative, and is unavoidable in many cases due to limited card options (life goes on).
  • Randomness – satisfying, dynamic implementation (you will draw a card in your first hand 50% of the time, and will likely see >75% of your deck in a round – until you start accumulating fatigue cards, which can change the shuffling distribution significantly).
  • Analytic complexity – relatively low when taken encounter by encounter (4 main symbols to match; deck limited to 10 chosen cards); moderately complex when understood in a game-wide context (a 5th symbol is introduced later; fatigue cards build up and symbol popularity diversifies, leading to more difficult-to-predict encounters).
  • Hidden information – used sparingly but effectively; without an ‘observe’ card, your partner’s 5-card hand is always hidden, but you are made aware of the symbols in their deck before beginning an encounter.
  • Narrative anticipation – the aforementioned time structure helps build narrative anticipation, as does the fact that reaching an accord depends entirely on player performance (though partners sometimes have special cards that help you out if you’re struggling – a nice surprise!). The writing also supports narrative uncertainty: by defining the emotional state of your partner and the thing they want to agree upon with you, but not the interstitial dialogue, the dramatic subtext of the conversation is abstracted into card mechanics, and will not always proceed as expected!

Understanding Gaming Experience: Case Study (Griftlands)

Here I have collated the case study work I performed on Griftlands, using formal analysis, Calleja’s Player Involvement Model, and Costikyan’s uncertainty categories.

Formal Analysis

  • Hand size – 5 (starting size; cards can increase hand size in the following turn)
  • Deck size – 10 (starting size; can grow or shrink as players add or remove cards)
  • Deck update method – New cards are awarded through successful encounters, as is the ability to upgrade valuable cards once or remove unwanted cards from your deck; cards gain ‘experience’ through being played, and can be upgraded this way once
  • Basic card interaction – Use action points to play cards, with the aim of reducing your opponent’s Resolve to 0 while keeping your Resolve above 0. Resolve is stored in the ‘core argument’, representing your narrative intention in the encounter.
    • Cards can cause two different types of damage (Hostility / Diplomacy), each of which can be multiplied by playing additional ‘arguments’, which each have their own resolve
    • Manipulation cards can add ‘composure’ (defence) to your core and additional ‘arguments’
  • Special card interactions – Arguably too many to list, but here are some:
    • Draw: add a card from your draw pile to your hand
    • Replenish: when drawn, this card draws another card immediately
    • Improvise: generate a set of random cards for the player to choose 1 of to add to their hand
    • Discard: discarding a card adds it to your discard pile and allows it to be shuffled and redrawn once your deck runs out of cards
    • Expend: when played this card is removed from your deck until the end of battle
    • Destroy: when played this card is permanently removed from your deck
    • Incept: create an argument/effect on your opponent’s field
    • Evoke: play this card automatically once the condition is met from your hand (or deck if drawn mid turn)
  • Turn cycles/round – N/A
  • Rounds/encounter – Typically 5-15 turn cycles per encounter
  • End of turn actions – Unused cards are discarded; player draws 5 new cards
  • End of round actions – Not an end of round action as such, but when you run out of cards to draw, your deck gets refreshed with your discard pile
  • Round win condition – N/A
  • Round loss condition – N/A
  • Encounter win condition – Reduce opponent’s Resolve to 0
  • Encounter loss condition – Concede, or have your Resolve reduced to 0
  • Information – Generally perfect (opponents telegraph the arguments they are targeting, as well as the damage their arguments will do – this is called ‘intent’); some opponent actions can obscure their intents
  • Average encounter length – 5 minutes
  • Player/partner narrative relationship – Confrontational
  • Narrative objective – Either avoiding a threatened physical confrontation or convincing your opponent to do something they don’t want to do – eject someone from a bar, sell something for a bargain price, or aid the player in an upcoming confrontation
  • Narrative super-objective – Campaign-dependent (there are 3 campaigns, each with a different protagonist); ‘revenge yourself on your enslaver’ is the initial campaign character’s super-objective
  • Dialogue triggers – Procedural barks after every 1-2 cards; dialogue ‘scene’ after victory, concession or loss

Player Involvement Model

  • Spatial – encounters are presented in the third person, with an animated player character ‘in conversation’ with your opponent; observing your own character reacting increases narrative involvement, but perhaps lessens the sense of incorporation. A slight sense of exploration is achieved at the game-wide level, which increases narrative involvement.
  • Kinaesthetic – controls are compelling and accessible, with fewer than 3 clicks on average required to affect most actions; feedback is stylish, immediate and involving.
  • Ludic – most elements are directed towards ludic involvement. Actions have a clear impact on the game state, with numerical consequences previewed before action is taken. Progress towards the ludic/narrative goal, represented by the integer state of player and opponent resolve, is clear, if quite ‘gamey.’
  • Shared – negotiation encounters are locked to quest-specific NPCs, so the sense of social interaction with agents in the world feels limited; negotiations are competitive-only affairs.
  • Narrative – narrative progression bookends each encounter, with choice-less dialogue ‘scenes’ playing out depending on a victory or a loss; though player and opponent are expressively designed and animated, the integer representation of ‘resolve’ as the only indicator of in-encounter narrative progression leaves little room for narrative nuance within negotiations. Characters recur, and have ‘relationship statuses’ towards the player.
  • Affective – despite the explicitly emotive naming conventions of cards (describing various conversational ‘moves’), encounters mostly involve the player at the affective levels of tension and suspense (will I win or lose?).

Uncertainties

  • Performative uncertainty – fairly low; misclicking is possible but unlikely, actions do what they are expected to do 99% of the time.
  • Player uncertainty – medium; new cards with unique mechanics constantly introduced, opponents have access to unique, often unpredictable ‘arguments’ and abilities; resource-based gameplay makes indecision more likely. Failure is a significant barrier event, rendering the player unable to negotiate until they have refilled their ‘resolve’.
  • Solver’s uncertainty – low in-encounter (numbers go up, numbers go down); high when considering deck-building possibilities (winning, buying and removing cards, no upper deck limit).
  • Randomness – highly dependent on deck size, but the length of matches means previously played cards are reshuffled into a new deck, increasing randomness exponentially over time.
  • Analytic complexity – relatively high; players need to balance tactical considerations like action management, offense, defense, optional floating resources (‘influence’) and turn-sensitive opponent actions (abilities that increase in power over time, or trigger after X turns); the number of game-available cards and opponent abilities is almost ungrokkably high, such that a wiki or guide is handy for optimal deck-building.
  • Hidden information – low; players usually have perfect information about an opponent’s ‘intent’ for the next turn.
  • Narrative anticipation – often lost mid-encounter; dialogue is only meaningfully progressed post-encounter, in predetermined dialogue ‘scenes’; in-encounter procedural barks quickly become repetitive.

Understanding Gaming Experience: Research Log 2

The following is a collection of draft paragraphs, supplemented by research, introducing some of the key concepts of my paper. While useful in the initial drafting of the work, this part of the literature review was ultimately deemed too wide-reaching for the scope, and I was able to condense it into single sentences or references. I suspect, however, that these paragraphs will be of use when I come to writing my thesis, as I expect to go into more academic detail on some of the concepts detailed below.

NARRATIVE GAMES

Taking Janet Murray’s definition from Hamlet on the Holodeck, narrative games – or interactive narratives – are games that absorb players into their story through a mixture of agency, immersion and transformation. The player (usually) has agency to change the story, or react to it; the story world reacts in a predictable way, or follows a narrative logic; the player (or the player character) is transformed through their exposure to the story, or they see the world transformed (Murray, 1997). This story-engagement has been separately defined by Gordon Calleja as ‘narrative involvement’; rather than deriving from only a game’s written script, any and every game element can inform and service the player’s narrative involvement (Calleja, 2011).

CONTEMPORARY NARRATIVE GAMES

Contemporary narrative games have sought to address the aforementioned stagnation of dialogue design. Some, like Disco Elysium, have opted for deep character exploration and expressive interaction. Others have implemented timed and time-specific responses to create a sense of theatrical liveness and narrative urgency (Oxenfree, The Walking Dead). One sub-genre, now in the ascendant, is the narrative card game, which uses the uncertainty of card-drawing and -playing mechanics to increase both replayability and emergent storytelling (Costikyan, 2013). Successful examples have included Reigns and its sequels, as well as Where The Water Tastes Like Wine and Cultist Simulator, but none of these have married their game or story mechanics with their dialogue design in particularly novel or theatrical ways.

PLAYER ROLE

The concept of a ‘player role’ has been much discussed in games academia, from Juul (2007) and Schell (2008) to Mateas (2004) and Fernandez-Vara (2009). Most pertinent to this essay, however, is Gonzalo Frasca’s assertion (using theatre practitioner Augusto Boal’s term) of the player as a ‘spectactor’ – simultaneously a spectator and an actor (Frasca, 2004). Richard Schechner, in his seminal text Performance Studies, provides the following hierarchy of performative experience:

The drama is the domain of the author, the composer, scenarist, shaman; the script is the domain of the teacher, guru, master; the theater is the domain of the performers; the performance is the domain of the audience.

Schechner, 2006

As if prefiguring the challenge of fitting games into this model, he also states that ‘in many situations, the author is also both guru and performer; in some situations the performer is also the audience.’ Frasca’s theory of the player as ‘spectactor’ lines up neatly with Schechner, but clashes forcefully with Mateas’s assertion that ‘the player should not have the feeling of playing a role, of actively having to think about how the character they are playing would react’ (Mateas, 2004). Mateas’s assumption here – of the player being capable of performing as a character as instantly and naturally as they would live their own life, even with the aid of cunning game design affordances – fails to take into account the practicalities of acting work as a necessary step towards presenting dramatic subtext to an audience, and thus for a ‘spectactor’ to experience it. Unless a player consciously integrates, to some degree, the work of an actor into their role, subtext will always remain distant from them.