A generic list manager for room script data decoupling

Fellow Critters,

I’ve created a generic list manager script that allows you to decouple list data from your room scripts. This addresses two key pain points:

  1. Script size limits - Helps keep your scripts under the 20k character limit by moving descriptions, events, and other content data out of the code.
  2. Reusability - Separates room-specific content (events, puzzle solutions, descriptions) from script logic so you can deploy the same script everywhere.

How it works:

First, I deploy the list manager script in a room, which gives me commands to create and populate lists in that room's persisted state:

list new maze_descriptions
list push maze_descriptions "You find yourself in a twisting corridor of stone..."
list push maze_descriptions "The passage opens into a small chamber..."
list push maze_descriptions "Strange markings cover the walls here..."

list new maze_solution  
list push maze_solution "n"
list push maze_solution "e" 
list push maze_solution "w"

lists
Available lists:
maze_descriptions (3 items, 139 chars)
maze_solution (3 items, 9 chars)

Total: 2 lists, 6 items, 148 characters

In my actual room scripts, I can then use minimal helper code to consume the data:

// Minimal helper
const LIST_PREFIX = "list_";
function getStoredList(name: string): string[] {
    const data = Store.getString(LIST_PREFIX + name);
    if (data == null || data == "") return [];
    return JSON.parse<string[]>(data);
}
// Pick a random entry (empty string if the list is missing or empty)
function getRandomFromList(name: string): string {
    const items = getStoredList(name);
    if (items.length == 0) return "";
    return items[<i32>(Math.random() * items.length)];
}

// Usage examples
const description = getRandomFromList("maze_descriptions") || "Default description";
const solution = getStoredList("maze_solution");

Benefits:

  • Scripts become pure logic; content lives externally
  • Easy to update descriptions/content without redeploying scripts
  • Could enable script sharing with standardized data interfaces
  • Manages storage efficiently with JSON serialization

Is anyone else doing this kind of approach? Any issues I should be aware of?

It seems like standardizing storage of simple data structures could really help with script reusability across different rooms/realms, so perhaps this kind of functionality should eventually come ‘batteries included’.

If there’s interest, I’m happy to share the full list manager code or give it to the community to include in the mucklet scripts repository.

Heyho,

Fuzzer.

Well, this is embarrassing. Turns out each script has its own isolated Store, so the list manager script and maze script can’t share data - they’re accessing completely different storage spaces. This means the idea of having a separate list manager script provide data to other scripts simply doesn’t work. What a pity. Should have tested that before posting!

My apologies for jumping the gun on this one.

Heyho,

Fuzzer


The limitation is clearly stated in lib/host.ts:

“Each script instance has its own store, and does not share this data with other scripts.”

Well, piffle.


Perhaps you can request data from the store script via a message? That seems like the way it’s intended to work, albeit a bit roundabout.

Following up on the list manager discussion - I’ve explored the messaging approach @GreenReaper suggested.

AFAICT, scripts can only communicate via async messages (Script.post/onMessage). Since AssemblyScript lacks async/await, callbacks, or any blocking mechanisms, you can’t simply ‘get’ data from another script. This means:

Instead of:

  const items = getListFromService("descriptions");  // nuh-uh

You need something like:

Script.listen([LIST_SERVICE]);
Script.post(LIST_SERVICE, "get", "descriptions");

// ... later in onMessage ...
if (topic == "descriptions") {
    const items = JSON.parse<string[]>(data);
    // MUST use items HERE immediately - there's no way to return
    // them to the code that made the request
}

Since there is currently no code reuse (e.g. import/include), every script using the service must duplicate this boilerplate, and will probably need refactoring into event-driven logic.
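To make the shape of that refactor concrete, here's a rough sketch in plain TypeScript (invented names, not the actual Mucklet API) of how the "continuation" ends up living in stored state instead of on the stack:

```typescript
// The request's "continuation" must live in stored state, keyed by a
// request id, because nothing on the stack survives until the reply.
type Pending = { list: string; purpose: "describe" | "solve" };

const pending = new Map<string, Pending>(); // requestId -> what to do with the reply

// `post` stands in for posting to the list service; here it is assumed
// to return a request id we can correlate the reply with.
function request(listName: string, purpose: Pending["purpose"], post: (list: string) => string): void {
  const id = post(listName);
  pending.set(id, { list: listName, purpose });
}

// Called from the onMessage handler when the reply eventually arrives.
function onReply(id: string, items: string[]): string | null {
  const p = pending.get(id);
  if (!p) return null; // stale or unknown reply
  pending.delete(id);
  return p.purpose === "describe" ? items[0] : items.join(",");
}
```

Even this toy version shows the cost: every call site splits into a "send" half and an "on reply" half, with bookkeeping in between.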

IMO the juice isn’t worth the squeeze. We’ll need platform updates to get to a model where we can decouple room data/configuration from script logic. And if platform updates are involved, then maybe it will come in the form of first-class support for room/owner namespaced list and/or dictionary persistence and no need for script workarounds.

Heyho,
Fuzzer


Ah— but the squeeze’s the thing, in which we’ll catch the conscience of the… Kzin?

. . . I forget where I was going with that. :sweat_smile:

(My follow-up idea of passing a ref_func to hand back like it was some kind of u32 webhook seems like an equally bad approach, if it even worked - as far as I know the scripts are separately sandboxed so it wouldn’t help with the need for further code on the ‘client’ end.)

I have a lot of context doing this for @Shinyuu lately so let me tell you some things :smiley:

The way the WASM VM works, you can’t rebuild the stack, meaning any async call or “coroutine” has to be inherently stackless. Because mucklet does not guarantee the script will be alive when you call it back, all the context must be passed down through whatever IPC you do, and it’s a world of pain.

I had a few asks from @Accipiter lately on that front, and I think there are ways to improve this. As it stands right now, you can do IPC, it’s just not practical.

I build all the scripts in a monorepo (meaning they share bits of code), and I don’t use AssemblyScript, instead writing them in Rust. This allows me to control the flow on a much lower level and effectively imitate the “async” behavior by passing the context through the calls. It’s not enjoyable and it’s not trivial code, but it works.

The easy way to make it work rests on the following concepts.

Have a router script

Its ID will be hardcoded in every other script. When you want to call another script, you call it through the router and name the actual destination, e.g. you pass it { "target_room": "xxxyyzz", "target_script": "inventory", "data": { "action": "store_room_item", ... } }. You really want to have one message structure across all of your scripts to simplify de-/serializing.

Every other script must “check in” with the router on activation (i.e. send a message “hello, I’m the inventory script”). Then the router can keep a DNS-like table of “for room abc inventory, you pass it to script id xxyyzz”.
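For illustration, here's roughly what that DNS-like table could look like in plain TypeScript (the names and shapes are my guesses at the pattern described, not actual Mucklet or production code):

```typescript
// Envelope every script uses when talking through the router.
interface Envelope {
  target_room: string;
  target_script: string;
  data: string; // one shared, serialized message format across all scripts
}

class Router {
  // "room:scriptName" -> script address, filled in by check-ins
  private table = new Map<string, string>();

  // Every script announces itself on activation:
  // "hello, I'm the inventory script in room xxxyyzz"
  checkIn(room: string, name: string, addr: string): void {
    this.table.set(room + ":" + name, addr);
  }

  // Resolve an envelope to a concrete script address, or null if
  // that script hasn't checked in yet.
  resolve(env: Envelope): string | null {
    return this.table.get(env.target_room + ":" + env.target_script) ?? null;
  }
}
```

The check-in step is what keeps the table current when scripts are redeployed and get new addresses.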

Return plain actions

The return path can technically be “fancy” - after all, an async function is just a state machine, and you can serialize that state machine and pass it along. Things will fuck up spectacularly on updates, of course. So I figured it’s not practical to do significant code offloading, and I just have a “god script” that does all the business logic as pluggable modules and also keeps most of the state in its own store.

The room scripts are effectively command proxies (poor-man’s global scripting), and as such they only need a reply that tells whether to describe/private describe/info or otherwise output anything (i.e. the last bit). Structuring your code in the same way helps a bunch.
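As a sketch of what "return plain actions" might mean in practice (the field names are invented for illustration):

```typescript
// The god script does all business logic and replies with only the
// final output instruction; the room script is a thin executor.
type Action =
  | { kind: "describe"; text: string }
  | { kind: "privateDescribe"; target: string; text: string }
  | { kind: "info"; text: string }
  | { kind: "none" };

// All a room script's reply handler has to do; `out` stands in for
// whatever output calls the real runtime provides.
function applyAction(a: Action, out: string[]): void {
  switch (a.kind) {
    case "describe": out.push("describe: " + a.text); break;
    case "privateDescribe": out.push("private(" + a.target + "): " + a.text); break;
    case "info": out.push("info: " + a.text); break;
    case "none": break; // the business logic decided no output was needed
  }
}
```

Because the reply is plain data rather than serialized control flow, updating the god script doesn't strand in-flight state machines.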

It’s not an easy approach and there’s lots and LOTS of duplication, but it, functionally, does the job. I have several megabytes of code deployed over mucklet this way and it’s pretty stable. You’d kind of want to work with a builder if you go that way, because deploying it per room owner is just a PITA when you have an update you need to deliver to 50+ rooms over 10+ characters.


Heyho!

Thanks for laying this out! These details are super useful and clearly hard-won.

I follow the approach and the trade-offs. The router/lookup table and centralized orchestrator pattern makes total sense given the constraints. I really appreciate you sharing what actually works in production.

Is there any public roadmap or design-goals doc for mucklet scripting? I’m trying to learn whether the async/lifecycle and current deployment constraints are intentional or just where things are today.

I think I’ll keep my own projects on the simple side while I see where it’s headed. Thanks again for sharing what’s worked in the wild.


Not an accipiter problem, more of a wasm problem: async proposal, async executor in rust. Those are very specific, though; they expect a runtime that is closer to the browser than what mucklet offers - it can drop your execution state at any moment. I wouldn’t hold your breath waiting for async to work, because, in all fairness, the erlang/BEAM VM’s approach to processes and message passing fits mucklet’s infrastructure much better. At its core, it’s expected that a process might just die, so no extra state is assumed to survive. It’s synchronous by nature, too. If you consider all mucklet scripts as erlang processes, then you’ll see a bunch of similarities. I’d even wager one more step and say that if the runtime cycle were more akin to the GenServer model (state comes in and is returned), then it’d be easy to apply the model without thinking about how to preserve the whole script memory footprint.

There are a few caveats to this. While BEAM is designed for a huge number of processes (like mucklet scripts), it’s inherently tail-recursive, i.e. it’s easy to express the main loop in Erlang terms: you just call the function itself, and that does not create a stack frame. You can’t easily do that in AssemblyScript, which is the reference script implementation, nor in e.g. rust (which I use). The other caveat is that GenServer has a neat concept of hibernation that allows you to reduce the memory footprint of a process significantly while it isn’t doing anything.
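For anyone unfamiliar with GenServer, the "state comes in and is returned" cycle can be sketched in plain TypeScript like this (a toy model, not any real API):

```typescript
// Toy GenServer-style handler: the runtime hands in the current state,
// the handler returns the next state plus an optional reply. No live
// memory needs to survive between calls - the runtime can drop the
// process and replay only the state.
type State = { count: number };

function handleMessage(state: State, topic: string): [State, string | null] {
  if (topic === "inc") return [{ count: state.count + 1 }, null];
  if (topic === "get") return [state, String(state.count)];
  return [state, null]; // unknown messages leave state untouched
}
```

That shape is why the model maps so naturally onto a host that may kill a script at any moment: the whole script is just a pure-ish function of (state, message).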

Should @Accipiter drop wasm and rewrite everything in erlang then? lol no. Can we learn from the industry? probably.

I’ve been tossing a bunch of ideas on the pile of what needs to be done but I’m not sure where the current priorities are. I think that having real IPC would be great but I am very aware that it’s not a weekend project.

As far as I’m concerned, the only limits I’ve been hitting lately are memory/cycles. God scripts use up a lot, and there’s a lot going on to implement extra senses, inventory, trade, and NPC dialogue (groan) in a single piece of code. Other things can be worked around.

At this point I’d say a huge benefit to scripting is global scripts with extra resources and capabilities, but I don’t think that’s exactly a wolfery use case, it’s more of a Shinyuu use case. A huge-ass global script removes the need of IPC by the sheer power of being omnipresent and omnipotent.

Wolfery? IDK. script DNS would be good. IPC would be good. better UI capabilities other than info/error would be good. That’s my take ofc.