
On defaults and memory

With careful prompting, you can lead an AI model to generate content that isn’t “slop”. The problem lies in the defaults.

An LLM isn’t stateful between sessions, and it has no global perspective on its own usage. Each time we use it, it is reborn fresh. It’s an amnesiac, asking us our name again every time we meet it.
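To make the amnesia concrete: in a typical chat setup every turn is a fresh call, and all the “memory” lives in the transcript the caller resends each time. Here’s a minimal sketch in Python, where generate() is a hypothetical stand-in for any chat-completion endpoint, not a real API:

```python
from typing import Dict, List

def generate(messages: List[Dict[str, str]]) -> str:
    """Hypothetical model call: it sees only what is in `messages`."""
    # A real endpoint would return a completion; here we just report
    # how much context the model was actually given on this call.
    return f"(model reply, conditioned on {len(messages)} messages only)"

# All "memory" lives on the caller's side, in this list.
history: List[Dict[str, str]] = []

for user_turn in ["Hi, I'm Alice.", "What's my name?"]:
    history.append({"role": "user", "content": user_turn})
    # If we sent only the last message instead of `history`, the model
    # would have no idea who Alice is. The illusion of continuity is
    # reconstructed from scratch on every call.
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    print(user_turn, "->", reply)
```

Drop the history and send only the latest message, and the reply to “What’s my name?” has nothing to go on. The model’s apparent memory is rebuilt from the transcript on every single call.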

Is an amnesiac particularly well suited to creating art? Given a similar introduction and a similar request, nothing stops the amnesiac from gleefully churning out the same kind of art again and again. It has no memory of what it has produced before, so it will keep defaulting to its median style unless it is told otherwise.

If you were taken back to the same point in time repeatedly, with your subsequent memory erased, and asked to draw a picture of something, the pictures you drew would likely cluster far more tightly than if you were asked to draw the same picture repeatedly while remembering what you had drawn before.

It seems that AI is not going to be a human-level creative until it has genuine memory/statefulness and can respond to external stimuli beyond the confines of its sandboxed chat. After all, our taste comes from some chaotic mixture of our past experiences, and what is that but memory?

Now we have a contradiction. It seems that for art generation we want our models to have some kind of memory, yet with memory comes inconsistency and unrepeatability: two runs with the same instructions will produce a wider distribution of outputs than they would without it. Products are generally optimised for universal, consistent behaviour, and that consistency is exactly what we want for rational, predictable computational tasks. It could be that to make the machines more creative, we need to make them less predictable, less rational, less reliable... Altogether more flawed and human.
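To see that trade-off in miniature, here’s a toy simulation (nothing to do with any real model): a “stateless” generator that always leans on its default weights, versus one that remembers what it has already produced and is nudged away from it. The styles and probabilities are invented for the sketch.

```python
import random
from collections import Counter
from typing import List

STYLES = ["median house style", "minimalist", "baroque", "surreal"]
BASE_WEIGHTS = [0.85, 0.05, 0.05, 0.05]  # the defaults dominate

def stateless_draw() -> str:
    # Same prompt, same defaults, every single run.
    return random.choices(STYLES, weights=BASE_WEIGHTS)[0]

def stateful_draw(memory: List[str]) -> str:
    # Down-weight anything already produced: repeatable -> exploratory.
    weights = [w * 0.2 if s in memory else w
               for s, w in zip(STYLES, BASE_WEIGHTS)]
    choice = random.choices(STYLES, weights=weights)[0]
    memory.append(choice)
    return choice

random.seed(0)
memory: List[str] = []
print("stateless:", Counter(stateless_draw() for _ in range(20)))
print("stateful: ", Counter(stateful_draw(memory) for _ in range(20)))
# The stateless counts pile up on the median style; the stateful ones
# spread out: wider, less repeatable, arguably more interesting.
```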