Thoughts on NaNoGenMo 2024

I spent about 25 hours in November producing a novel via an LLM for NaNoGenMo 2024. It was an interesting experiment, although the resulting book was not particularly engaging. There’s a flatness to LLM-generated prose that I didn’t overcome, despite the potential of the oral history format. I do still think that generated novels can be compelling, even moving, so I will have another try next year.

Some things I learned from this:

  • I hadn’t realised how long and detailed prompts can be. My initial ones did not make full use of the context window. Using gpt-4o-mini was cheap enough that I could essentially pass it prompts containing much of the work produced so far.
  • For drafting prompts, the ChatGPT web interface was more effective, because it maintains the full conversation as state. Once I started using it to experiment with prompts, things moved much faster.
  • Evaluating the output is incredibly hard here. In a matter of minutes I can generate a text that takes hours to read. Most of my reviews were done by random sampling, and I didn’t have time to properly examine the text’s wider structure.
  • It was also tricky to get consistent layouts from the LLM. Requesting JSON output helped somewhat, but at the cost of shorter LLM responses.
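As a sketch of the context-packing idea in the first bullet: one way to pass "much of the work produced so far" is to pack as many recent chapters as will fit a rough size budget ahead of the new instruction. The function name and budget figure below are my own illustration, not from my actual scripts:

```python
# Sketch: build a prompt containing as much of the novel-so-far as fits
# a rough character budget, preferring the most recent chapters.
# Names and the budget figure are illustrative.

def build_prompt(chapters, instruction, budget_chars=300_000):
    """Pack the most recent chapters into the prompt, then restore
    chronological order so the model reads them in sequence."""
    kept = []
    used = len(instruction)
    for chapter in reversed(chapters):  # newest first
        if used + len(chapter) > budget_chars:
            break
        kept.append(chapter)
        used += len(chapter)
    kept.reverse()  # back to chronological order
    return "\n\n".join(kept + [instruction])
```

The resulting string would then be sent as the user message to gpt-4o-mini; a token-based budget (e.g. via tiktoken) would be more precise than counting characters, but this captures the shape of the approach.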

Twenty-two books were completed this year and I’m looking forward to reviewing them. I have an idea for a different approach next year and will do some research in the meantime (starting with Lillian-Yvonne Bertram and Nick Montfort’s Output anthology).
