
Leveraging the power of ChatGPT for automated code conversion

  • Writer: Gareth Bloemeke
  • Aug 1, 2023
  • 5 min read

Updated: Aug 31, 2023

Back in early 2022, I made a mod for the video game Inscryption called, fittingly, Gareth’s Mod. It was small but very focused. The aim was to add booster-pack cards matching the original style of the game, not only to give players a reason to replay it, but to let them explore new concepts within it.

[Image: a screenshot from my Inscryption mod, showcasing the new cards it adds]

The mod was written using Harmony, a runtime monkey-patching library that makes it possible to add content to Unity games. It was one of my first longer-term projects, running a few months and through ten major versions. It did remarkably well; as of August 2023 it sits at around 11,000 downloads, making it one of the game's top ten mods by downloads.


As with any modding experience, there were growing pains and learning curves. The biggest for me was my former definition of “source control”: keeping a zip file of every major release. Through this project I also learned quite a bit about interfacing with obfuscated code and trying to adhere to engine standards.


One of the more important lessons this project taught me is that software maintenance is an ongoing, never-ending battle. This lesson hit especially hard when I decided to check up on my mod and discovered it had been deprecated. In the year since I had released it, the API had gone from version 1.8.1 to 2.15.1, and as a result my mod was almost entirely incompatible with the new API.


Out of a desire to make sure long-time fans could continue to enjoy the experience I’d crafted, I decided to dust off the .sln project and get to solving problems.



Converting Card Methods


The first challenge to overcome was updating the method I used for registering cards with the API. Previously, it had been done by calling an exceptionally large function with a set of properties, like the one below.
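Here's a simplified sketch of that style (the old API's NewCard.Add signature is reproduced from memory, so treat the parameter names and order as illustrative; generateTex() is my own texture-loading helper):

```csharp
using System.Collections.Generic;
using APIPlugin;     // the old 1.x API
using DiskCardGame;

// Old style: every property of the card is shoved into one enormous call.
NewCard.Add(
    "Garethsmod_Hyena",        // internal ID, mod prefix included by hand
    "Hyena",                   // displayed name
    2, 2,                      // attack, health
    new List<CardMetaCategory> { CardMetaCategory.ChoiceNode },
    CardComplexity.Simple,
    CardTemple.Nature,
    tribes: new List<Tribe> { Tribe.Dog },
    abilities: new List<Ability> { Ability.Sharp },
    defaultTex: generateTex("hyena.png"),
    pixelTex: generateTex("hyena_pixel.png"));
```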



Thankfully, both for my sanity and the readability of the code, many previously internal properties had now been abstracted away. The tools were improved, and cards were now defined by method chaining. This was far less garish to interface with:
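(A reconstruction rather than the literal snippet; CardManager.New and the chained setters follow the 2.x API as best I can recall, and note that texture paths can now be passed as plain strings.)

```csharp
using DiskCardGame;
using InscryptionAPI.Card;   // the new 2.x community API

// New style: CardManager.New creates the card, and everything optional
// is layered on through chained extension methods.
CardInfo hyena = CardManager.New("Garethsmod", "Garethsmod_Hyena", "Hyena", 2, 2)
    .SetPortrait("hyena.png")
    .SetPixelPortrait("hyena_pixel.png")
    .AddTribes(Tribe.Dog)
    .AddAbilities(Ability.Sharp);
```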


The only problem was that I had 30 cards. Converting them manually would be incredibly arduous and likely error-prone. So I started mulling over other solutions, knowing that odds were automating this might actually be faster than updating it by hand. On a whim, I decided to mention the dilemma to ChatGPT. I sent it the two Hyena code snippets, before and after, and politely requested it convert any further strings I sent.


It agreed, and so I tasked it with converting another card, the elephant. I fed it the below input:
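(A reconstruction, in the same old style as the Hyena; note it defines no tribe and no pixel portrait, which matters in a moment.)

```csharp
// Old-style Elephant definition (illustrative reconstruction).
NewCard.Add(
    "Garethsmod_Elephant",
    "Elephant",
    4, 6,
    new List<CardMetaCategory> { CardMetaCategory.ChoiceNode },
    CardComplexity.Simple,
    CardTemple.Nature,
    abilities: new List<Ability> { Ability.Strafe },
    defaultTex: generateTex("elephant.png"));
```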



And I was greeted with this:
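(Reconstructed from memory; the per-line comments are representative of GPT's, not mine.)

```csharp
// Create the Elephant card
CardInfo elephant = CardManager.New("Garethsmod", "Garethsmod_Elephant", "Elephant", 4, 6)
    // Set the card's portrait texture
    .SetPortrait("elephant.png")
    // Set the card's pixel portrait texture
    .SetPixelPortrait("elephant_pixel.png")
    // Add the card's tribe
    .AddTribes(Tribe.Nature)    // erroneous: Tribe.Nature doesn't exist (see below)
    // Add the card's abilities
    .AddAbilities(Ability.Strafe);
```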


It came out nearly perfect, though admittedly a bit overzealously commented. It reminded me of some of my early work, back when I was first trying to develop a flow for leaving comments. This nearly pristine output was generated in under a second, and I was starting to get the feeling that I was about to get hours of my time back.


There were some snags, however. For one, ChatGPT had assumed that because the Hyena had a tribe (Tribe.Dog), the elephant needed one as well, and it erroneously assumed CardTemple.Nature (a classifier for the group a card belongs to) could be cast to a tribe, leading to the line AddTribes(Tribe.Nature).


Alongside snags like these, though, there were some moments of impressive inference. Even though the original elephant code lacked a definition and assignment of a pixel portrait, ChatGPT reasoned that because one was present on the first card, it would need to be on the second as well, a correct assessment. It had also automatically pulled the texture strings out of the enclosing generateTex() method they had been wrapped in during an earlier iteration, and it perfectly preserved the formatting.


This was a huge revelation, and I quickly learned I could feed it three or four cards at a time and it would generate them all in a few seconds. Pasting them in, double-checking, and fixing errors or omitted code let me get through the bulk of the cards very quickly. In addition, whenever GPT made a mistake, it could be instructed on how to avoid it in the future. For example:


Me: Just as a heads up CardTemple.Nature cannot be cast to Tribe.Nature so no need to do that. Some cards just don't have tribes. Also we don’t need to define properties in the introductory block. Try again with this in mind. Also no need to have per line comments, thanks.


With this in mind, it returned this:
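(Again a reconstruction, but this is the shape of it.)

```csharp
CardInfo elephant = CardManager.New("Garethsmod", "Garethsmod_Elephant", "Elephant", 4, 6)
    .SetPortrait("elephant.png")
    .SetPixelPortrait("elephant_pixel.png")
    .AddAbilities(Ability.Strafe);
```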


Which was perfect, and it struck me how powerful this was for routine tasks like these. As one final change, I had it remove the “Garethsmod” prefix from the internal ID, since the prefix was now assigned automatically and had been making most of my cards' IDs come out as Garethsmod_Garethsmod_Card. GPT was able to comply with every one of my requests, and its limited temporal memory was enough that it could view the formatting of the previous card when creating the next one.


Based on character limits (I'm on the free tier of ChatGPT), I was able to feed it four cards at a time. Given that each result was a quick spot check away from being usable, I saved a ton of time. Any mistakes it made, I could train GPT to avoid in the future, and for one-offs I could simply correct them myself. I was also routinely surprised at how often GPT could infer novel concepts from how intuitively the API's functions were named. For example, one of my cards has an “alternate portrait” which appears when a specific action occurs; GPT correctly inferred that the method for initializing this was SetAlternativePortrait(). It succeeded similarly later on with the SetEvolve() method, though the formatting was off.
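To illustrate with a purely hypothetical card (the name, stats, and evolve target below are mine for demonstration, not an actual card from the mod):

```csharp
// GPT inferred both of these setters from the API's naming conventions alone.
CardInfo cub = CardManager.New("Garethsmod", "Garethsmod_Cub", "Cub", 1, 1)
    .SetPortrait("cub.png")
    .SetAlternativePortrait("cub_alt.png")  // swapped in when a specific action occurs
    .SetEvolve("Garethsmod_Beast", 1);      // evolves into the target card after one turn
```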


Converting Sigils


Up next were the sigils. Sigils in Inscryption are abilities that cards have, denoted by icons on the card that can be looked up in the rulebook. Thankfully, the underlying logic of my sigils didn't need to be changed; what did need to be altered was their initial formatting. Another simple transformation, this time from this:
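(A sketch of the old shape, built around a hypothetical sigil called Trampling; the 1.x API's NewAbility types are reproduced from memory, so treat this as illustrative.)

```csharp
using APIPlugin;
using DiskCardGame;
using UnityEngine;

// Old style: build the AbilityInfo by hand, then wrap it in NewAbility
// along with the behaviour class (Trampling), defined elsewhere in the mod.
AbilityInfo info = ScriptableObject.CreateInstance<AbilityInfo>();
info.rulebookName = "Trampling";
info.rulebookDescription = "[creature] pushes any card in its way when attacking.";
info.powerLevel = 2;

NewAbility trampling = new NewAbility(
    info,
    typeof(Trampling),
    generateTex("trampling_icon.png"),
    AbilityIdentifier.GetAbilityIdentifier("gareth.inscryption.garethsmod", "Trampling"));
```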


To this:
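(Again a sketch; the plugin GUID is illustrative. The same pieces are now registered through a single AbilityManager.Add call.)

```csharp
using DiskCardGame;
using InscryptionAPI.Card;
using UnityEngine;

// New style: one AbilityManager.Add call wires up the info, behaviour, and icon.
AbilityInfo info = ScriptableObject.CreateInstance<AbilityInfo>();
info.rulebookName = "Trampling";
info.rulebookDescription = "[creature] pushes any card in its way when attacking.";
info.powerLevel = 2;

AbilityManager.Add(
    "gareth.inscryption.garethsmod",
    info,
    typeof(Trampling),
    generateTex("trampling_icon.png"));
```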


Which it did rather well from this input:
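(A second hypothetical sigil standing in for the real input, in the old format.)

```csharp
AbilityInfo info = ScriptableObject.CreateInstance<AbilityInfo>();
info.rulebookName = "Pack Hunter";
info.rulebookDescription = "[creature] gains 1 Power for each other creature you control.";
info.powerLevel = 3;

NewAbility packHunter = new NewAbility(
    info,
    typeof(PackHunter),
    generateTex("packhunter_icon.png"),
    AbilityIdentifier.GetAbilityIdentifier("gareth.inscryption.garethsmod", "PackHunter"));
```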


Converting it into:
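(Reconstructed; note the stray return, discussed below.)

```csharp
AbilityInfo info = ScriptableObject.CreateInstance<AbilityInfo>();
info.rulebookName = "Pack Hunter";
info.rulebookDescription = "[creature] gains 1 Power for each other creature you control.";
info.powerLevel = 3;

// GPT added a return here even though the enclosing method didn't need one.
return AbilityManager.Add(
    "gareth.inscryption.garethsmod",
    info,
    typeof(PackHunter),
    generateTex("packhunter_icon.png"));
```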


The only mistake was that it erroneously returned an ability when it didn't need to. Considering it was missing the context of the enclosing method, though, this can be excused, and I taught GPT to avoid this pitfall as well.


With that out of the way, it was only a matter of converting the rest of the sigils' property blocks into the new format, and then I could make sure everything was functioning as intended by running through some tests.


Conclusions


Overall, ChatGPT saved me a ton of time and effort on what would otherwise have been an incredibly laborious and inefficient process. It proved adept at reformatting blocks of code into the proper style without loss of information. At its best, it even occasionally inferred correct additions based on my formatting and the context the methods provided.


Despite this, it was not perfect. It frequently needed guidance when it inferred wrongly or made odd or overly verbose formatting decisions. That aside, it easily saved me a few hours of monotonous, error-prone work, and in that it excelled as a tool in my workflow.



Special thanks to this guide, which made the syntax-highlighted code in this blog possible.




