Solving the mobile connection issue

So, as many are aware in one form or another, staying connected on mobile is a bit of an issue. Due to limitations imposed by both mobile ecosystems, WebSocket connections (which are used extensively by Mucklet) are closed not long after the site (or the app version being worked on by @farcaller) is put in the background. This doesn't bode well for the IRC style of connection Mucklet and Wolfery provide.

My thought is this: what if we developed a relay server to act as a virtual client for the user?

To put it in as simple terms as I can (a rough sketch in code follows this list):

  • Launch a program (preferably a Docker container, but it could also be a desktop app) that connects to Wolfery (or any future Mucklet) as a client, the same way that going to the website would.
  • This application then broadcasts itself as a server for the logged-in user's mobile app(s) to connect to (encrypted, of course) and relays any messages sent or received between the mobile app and the site proper.
  • This server could either store messages waiting to be received by the app and send them when the user re-focuses it, or push them as notifications.
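To make the shape of that relay concrete, here's a rough sketch in Go using the gorilla/websocket library. It only covers the raw pass-through part; the upstream URL, listen address, and path are placeholders, and the actual RES protocol handling, message storage, and push notifications described above aren't implemented here.

```go
// Minimal relay sketch: accept a connection from the mobile app, open one
// upstream connection to the realm, and copy frames in both directions.
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // accepts the mobile app's connection

func relay(w http.ResponseWriter, r *http.Request) {
	// Connection from the mobile app.
	app, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer app.Close()

	// Connection to the realm, the same way the web client would connect.
	// The URL is a placeholder, not the real endpoint.
	upstream, _, err := websocket.DefaultDialer.Dial("wss://wolfery.example/ws", nil)
	if err != nil {
		log.Println("dial upstream:", err)
		return
	}
	defer upstream.Close()

	// Copy frames both ways until either side closes.
	errc := make(chan error, 2)
	pipe := func(dst, src *websocket.Conn) {
		for {
			mt, msg, err := src.ReadMessage()
			if err != nil {
				errc <- err
				return
			}
			if err := dst.WriteMessage(mt, msg); err != nil {
				errc <- err
				return
			}
		}
	}
	go pipe(upstream, app)
	go pipe(app, upstream)
	<-errc // stop when either side drops
}

func main() {
	http.HandleFunc("/relay", relay)
	// The "encrypted of course" part would be ListenAndServeTLS with real certs.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```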

The reason I think a relay server may be the best option is as follows:

  • Many of these features are completely useless for a non-mobile user, aside from being able to stay logged in, I guess, and maybe keeping all your logs on the server instead of scattered across whatever clients you use.
  • Building these features into Mucklet itself could require major rewrites of the system and incur higher costs to operate, neither of which I would want to push onto bosswolf.
  • Not every user (probably) is going to need this extended system, so offloading the burden onto those that do (or onto some form of paid broker) will help keep the free parts of Mucklet free.

The reason I'm now talking about money is that, unless each user who wants to do this has their own server to run all the necessary parts on, including a way to transmit mobile push notifications, there's probably going to be some cost to spinning up one of these clients. It could be a potential revenue stream for Mucklet to offer as a paid add-on (maybe as a benefit of a potential Patreon or similar?).

For the same reason I bring up financials, I would also want this to be open source, so that those who have the know-how and capability, or folks who just want to mess with it, could opt to self-host the relay.

Oh, and as much as I love @farcaller's work on a mobile app, this would probably have to stand alongside it instead of working with it, unless that app is built with this kind of service in mind: the relay server would act as the ResClient instead of the app itself, which is something that app already does.

Sorry if this is kind of rambly, kind of a stream-of-consciousness post of all the ideas I currently have. Does anyone have any experience with something like this? I'm thinking I might do some tinkering with something in .NET, if I can figure out the ResClient side of it, seeing as it's what I have the most experience in, and I've done work with server-app API infrastructure and AWS notification services through it. But I'm open to suggestions and assistance if anyone has anything to add to the topic.


I’ve done some work in this domain; I built a functioning prototype of a cloud-based client for traditional telnet-based MU* servers. I’m interested in helping out with some of the architecture and perhaps some code.

The big, interesting problem here is that this is somewhat changing the nature of the communication. Mucklet handles messages[1] in a passthrough manner: as soon as it gets a message, it sends it to everyone who is supposed to receive that message, and then it forgets the message because it's done with it. This makes a pair of assumptions: that the channel for passing the message along is very robust, and that once the message has reached its destination it's not going to need to be re-sent.

In the mobile use-case, that first assumption is plainly invalid – we can't rely on the channel being robust, because that's exactly the problem we're trying to solve; the channel may be up or down at any time, and we can't count on controlling that, or even being notified when it happens.

I think the second assumption is also shaky, because I think the primary use-case for mobile access for many users is going to be to transition between mobile and traditional desktop clients as they go about their day. In that scenario, we really want all clients to have the message in their history before the server lets go of it.[2]

In the other direction, there's also the problem of idempotence – what happens when the same thing is done multiple times? If I'm on a flaky mobile network connection and I send "say Hi!" to the server, what happens if my client doesn't hear back from the server to acknowledge that the message went through? My client could automatically retry that – but if the message did go through, everyone I'm talking to is now seeing my message two, or three, or more times.

I’m of the opinion that these problems are best handled in one architectural swoop, by changing the mode of communication between the client and the proxy server. Instead of a client that maintains history and a proxy server that passes messages through, I think there’s a cleaner solution if the proxy server maintains history, and the client can subscribe to new messages in that history, and request past sections of it as needed.

There are some downsides to this, of course: the most obvious one is that a server-side history is likely to be on a device that isn’t physically controlled by its end-user, which has privacy implications. We can balance those by deciding how much history is kept (e.g., the proxy only keeps a day’s history, or a week’s. That’s straightforward to let the user control, too.) But as an opt-in solution, this achieves a clean experience for multi-device users, is not appreciably worse for privacy than most other systems out there, and is architecturally pretty straightforward.
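Here's a rough sketch, in Go, of what such a proxy-side history store could look like. All the type and method names are made up for illustration; the bounded retention mirrors the "only keep a day or a week" idea above.

```go
// Hypothetical proxy-side history store: the proxy appends every event it
// relays, clients subscribe to new entries and can ask for anything they
// missed. Retention is bounded so the user controls how much is kept.
package history

import (
	"sync"
	"time"
)

type Entry struct {
	Seq  uint64    // monotonically increasing position in the history
	Time time.Time // when the proxy saw the event
	Data []byte    // the relayed event, stored verbatim
}

type Store struct {
	mu        sync.Mutex
	entries   []Entry
	nextSeq   uint64
	retention time.Duration // e.g. 24h or a week, user-configurable
	subs      []chan Entry
}

func NewStore(retention time.Duration) *Store {
	return &Store{retention: retention, nextSeq: 1}
}

// Append stores a relayed event, prunes anything older than the retention
// window, and fans the new entry out to live subscribers.
func (s *Store) Append(data []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	e := Entry{Seq: s.nextSeq, Time: time.Now(), Data: data}
	s.nextSeq++
	s.entries = append(s.entries, e)
	cutoff := time.Now().Add(-s.retention)
	for len(s.entries) > 0 && s.entries[0].Time.Before(cutoff) {
		s.entries = s.entries[1:]
	}
	for _, ch := range s.subs {
		select {
		case ch <- e:
		default: // drop if a subscriber isn't keeping up
		}
	}
}

// Since returns everything after a known sequence number, so a reconnecting
// client can ask "what did I miss?" instead of relying on the live channel.
func (s *Store) Since(seq uint64) []Entry {
	s.mu.Lock()
	defer s.mu.Unlock()
	var out []Entry
	for _, e := range s.entries {
		if e.Seq > seq {
			out = append(out, e)
		}
	}
	return out
}

// Subscribe returns a channel of new entries as they are appended.
func (s *Store) Subscribe() <-chan Entry {
	s.mu.Lock()
	defer s.mu.Unlock()
	ch := make(chan Entry, 64)
	s.subs = append(s.subs, ch)
	return ch
}
```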


[1]: I'm leaving aside a lot of non-message things like interacting with the world here, because the model is more interesting than "pure" passthrough, but enough of it is passthrough that we need to handle that part, and if we've handled that part then extending that solution to the rest is a fairly straightforward flavor of problem.

[2]: "All clients" is also an open-ended set, since I can easily open another. If I sit down at a new computer – say, at a library – and open my web client, would it be reasonable for my history to be accessible to me? Keeping history client-side assumes that most users have consistent devices which they use for access, or at least that they must have such a device if they want to be able to review their own history.


Thank you for chiming in, and if this gets off the ground I’d love your help and feedback!

I share your concerns about those two assumptions, which is why I believe a proxy server to be the best route for mobile.

The way Mucklet handles multiple clients will, I believe, work fine for the purposes of adding a proxy server connection on top of the existing architecture. Accipiter has built it so that messages are shared between all connected clients for a generous period in case of a lack of overlap. I actually use it quite a bit to transition between desktop, laptop, and mobile now. So as long as the server sees the proxy client the same as it does another web client (as opposed to a bot client), I don't think we need to worry too much about the server end of that, as long as the proxy is running.

I'm thinking that each message would be relayed with something like a UUID or a checksum, so that, if the app client doesn't receive confirmation from the proxy, it can safely resend the message until it gets one, and the proxy can throw out any duplicates it receives. I personally haven't made anything that robust, though, so it would definitely be an architectural hurdle to cross when we get there.
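A rough sketch of that resend-and-deduplicate idea (the message ID scheme and the forward/ack hooks are hypothetical, not anything Mucklet defines):

```go
// Sketch of resend-until-acknowledged: the app attaches a client-generated ID
// to each outgoing message; the proxy remembers IDs it has already forwarded
// and silently drops repeats, so retries over a flaky link are safe.
package dedup

import "sync"

type Deduper struct {
	mu   sync.Mutex
	seen map[string]bool // message IDs already forwarded upstream
}

func New() *Deduper {
	return &Deduper{seen: make(map[string]bool)}
}

// Handle forwards a message at most once and always acknowledges, so the app
// can keep resending without everyone seeing "Hi!" two or three times.
// (A real version would also expire old IDs eventually.)
func (d *Deduper) Handle(id string, msg []byte, forward func([]byte) error, ack func(string)) error {
	d.mu.Lock()
	dup := d.seen[id]
	if !dup {
		d.seen[id] = true
	}
	d.mu.Unlock()

	if !dup {
		if err := forward(msg); err != nil {
			// Forwarding failed: forget the ID so a retry gets another chance.
			d.mu.Lock()
			delete(d.seen, id)
			d.mu.Unlock()
			return err
		}
	}
	ack(id) // acknowledged whether it was new or a duplicate
	return nil
}
```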

This could also mean that, if a user chose to, they could use the proxy server as a 'cloud backup' of their logs, connecting to the same proxy from multiple devices and not having to worry about logs or sessions or anything like that.

Yes, that is true, which is another reason why I think the option to roll your own instance of the proxy is essential; whoever runs the 'out of the box' solution will probably have to work some sort of privacy policy out with Accipiter, if it isn't run by him in the first place. It's also why I brought up the topic of encryption: if the logs are stored encrypted on the proxy, then things should be more secure than just sending them to the client app willy-nilly.


What you're talking about sounds a whole lot like a ZNC for Mucklet. I don't think it makes much sense, though.

The way Mucklet currently works, you get a replay buffer of 24 hours as long as you're logged in. If you have a computer to run the proxy, you have a computer to keep Wolfery open in your browser, and that's significantly easier for the end user. If you want uninterrupted mobile chat, the only thing you need to do is keep the browser tab open, and that's it: the mobile app will get the replay buffer for one day, and that's a long enough span to be reasonable.

Having bouncer-like proxies ruins the immersion. It's one thing to see empty rooms; it's another thing to see rooms full of idling people.

Now, there's a very valid point about messages not having guaranteed delivery. That's something that's way better addressed on the protocol side, though.

"Sorry if this is kind of rambly, kind of a stream-of-consciousness post of all the ideas I currently have. Does anyone have any experience with something like this?"

I already made one out of sticks and stones for the mobile app testing :slight_smile: Can polish it a bit more and publish, but it doesn’t make much sense to me, still.


There are quite a few assumptions in this thread. I haven’t been around much lately due to RL family reasons, but I thought I’d clarify how it works :slight_smile:

How connections work

The Mucklet server/core service does not care about WebSocket connections. That means mobile clients do NOT need to keep an active connection to stay awake.

However, Mucklet relies on getting a ping message at regular intervals (< 5 min between each ping) for each controlled character. If 5 min have passed since the last ping was sent, the server will automatically put that character to sleep.

This also means that if a client stops pinging, the controlled characters will all fall asleep at different moments within a 5 min period, making it harder to guess if two characters belong to the same player.

This ping message can either be sent over WebSocket, or be made over HTTPS. The client currently does the pinging using HTTPS on a separate Web Worker thread. However, it is possible to have the client send the ping over WebSocket using this URL:
https://wolfery.com/?charPing.method=ws

So the mobile application needs to be able to make an HTTPS request every 3 min or so to keep characters awake (assuming the mobile client is the only client in use. If a desktop client is also running on some computer, that client will also do the pinging to ensure characters are kept awake).
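For illustration, a minimal keep-awake loop could look something like this. The ping endpoint and auth header are placeholders; the only thing specified above is that an HTTPS request is needed every 3 minutes or so per controlled character.

```go
// Keep-awake sketch: one HTTPS ping every three minutes, well under the
// five-minute timeout, until asked to stop.
package pinger

import (
	"log"
	"net/http"
	"time"
)

func keepAwake(client *http.Client, pingURL, token string, stop <-chan struct{}) {
	ticker := time.NewTicker(3 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return // the character is being put to sleep deliberately
		case <-ticker.C:
			req, err := http.NewRequest(http.MethodPost, pingURL, nil)
			if err != nil {
				log.Println("ping request:", err)
				continue
			}
			req.Header.Set("Authorization", "Bearer "+token) // hypothetical auth
			resp, err := client.Do(req)
			if err != nil {
				log.Println("ping failed:", err)
				continue
			}
			resp.Body.Close()
		}
	}
}
```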

How resources are handled

Any data, except for chat log events, will automatically resynchronize once the client reconnects after a disconnect. This resynchronization has no limit on how long you may be disconnected. This is built-in functionality of Resgate, and is done automatically by the ResClient.

That means a mobile client may be disconnected for two days, but as soon as it reconnects, it will update awake lists, the current room, looked-on characters, mail, etc., to their current state.

So, the mobile application has no need to keep any connection for this.

How chat log events are handled

Yes, @Riverrynn is correct about Mucklet working in a passthrough manner, and chat log events cannot be synchronized in the same manner as the other data. But as @farcaller points out, there IS already a server-side service that keeps chat log events for 24 hours. These logs are stored in-memory on that service, and will never be part of any backup.

The client will, when connecting/reconnecting to the server, fetch any missing log events that the log service might still keep in memory.

So, a mobile client already has access to these logs.
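As a small illustration of that catch-up step (the fetch function here is a stand-in, not a real client API; only the "fetch what the 24-hour log service still remembers and merge it in" behaviour comes from the above):

```go
// Hypothetical client-side catch-up after a reconnect: ask the log service
// for recent events and merge them into the local log, skipping anything we
// already have, so overlapping buffers don't produce duplicates.
package chatlog

type Event struct {
	ID   string // assumed unique per event
	Text string
}

func CatchUp(local []Event, fetchRecent func() ([]Event, error)) ([]Event, error) {
	recent, err := fetchRecent()
	if err != nil {
		return local, err
	}
	have := make(map[string]bool, len(local))
	for _, e := range local {
		have[e.ID] = true
	}
	for _, e := range recent {
		if !have[e.ID] {
			local = append(local, e)
		}
	}
	return local, nil
}
```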

Conclusions

The only "connection issue" the mobile client has is that the mobile OS stops running the JavaScript that sends the periodic HTTPS ping request from the Web Worker. Everything else regarding synchronization of data and chat event logs is already taken care of.

If we can get more reliable periodic pinging, by either using Service Workers, turning the client into a PWA, or making it a mobile app using something like Cordova, then we are set.

Edit
If the goal is just to force characters to always stay awake, all you need is a simple "client" that only does the pinging. This can already be done with the mucklet-bot code.
But when players using that sort of ping bot close their browser tab without putting their characters to sleep, their characters will stay awake and idle, maybe for days or months, without any interaction. Not sure if that is desirable.


I guess that's the bold assumption I'm trying to work around. There seem to be at least a few people who use the service exclusively from a mobile device, and are thus slept pretty much every time they sleep their phone or background their browser.

My thought would be to build it around the same principles the browser currently uses: if the app is fully closed or idles for 'too long' (however long we decide that is), then the proxy would sleep the character(s) normally, as the user has left the server.

Yes, and I love the work you've done on it; it just sounds like users who choose to use your app exclusively are going to fall into the same pitfalls as exclusively using the site on mobile: every time they tab out, their device is going to halt the connection, stop pinging the server, and get put to sleep. Is this something that can be done in the background on both OSes with a Flutter app? That is, keep a background service pinging all awake characters while the app is in the background (not closed; that should do the same thing closing the browser would). From what I understood, Dart compiles down to JS, meaning it would essentially be a webview app, which falls under the same background-usage restrictions a browser page or a PWA would. If I'm wrong, that's awesome, and we can devote our efforts to the mobile app.

Otherwise, it sounds like we may still be over-engineering it, and the 'proxy server' would only need to do the following (a small sketch of this lifecycle follows below):

  • start pinging any of the user’s awake characters when the app is launched and logged in to.
  • keep the ping going even if the app is in the background
  • stop pinging when the user sleeps the character or the app is closed

And let everything else be handled by the front-end app.
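Something like this is roughly all that lifecycle would be (character IDs and the actual ping call are assumptions; the loop could be a per-character variant of the keep-awake sketch earlier in the thread):

```go
// Sketch of the "proxy that only pings": one goroutine per awake character,
// started when the app wakes the character and stopped when it sleeps or
// the app is closed.
package pinger

import "sync"

type Manager struct {
	mu    sync.Mutex
	stops map[string]chan struct{}                  // one stop channel per awake character
	loop  func(charID string, stop <-chan struct{}) // whatever actually sends the pings
}

func NewManager(loop func(charID string, stop <-chan struct{})) *Manager {
	return &Manager{stops: make(map[string]chan struct{}), loop: loop}
}

// Start begins pinging a character; called when the app wakes it up.
func (m *Manager) Start(charID string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if _, running := m.stops[charID]; running {
		return
	}
	stop := make(chan struct{})
	m.stops[charID] = stop
	go m.loop(charID, stop)
}

// Stop ends pinging for one character; called when the user sleeps it.
func (m *Manager) Stop(charID string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if stop, ok := m.stops[charID]; ok {
		close(stop)
		delete(m.stops, charID)
	}
}

// StopAll covers the "app was closed" case: everyone goes to sleep.
func (m *Manager) StopAll() {
	m.mu.Lock()
	defer m.mu.Unlock()
	for id, stop := range m.stops {
		close(stop)
		delete(m.stops, id)
	}
}
```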

Yes, that was my assumption; most of the data synchronization issues came from the proxy handling requests on behalf of the client, but as I said, that just overcomplicated things.

Thank you all for your input. As usual, some guy with a CS degree tries to over-architect a solution we don't need :rofl: I'll just close this topic for now and focus on helping out with @farcaller's mobile app if and where I can.


Sorry, I wasn't clear – I made a barebones bouncer, in addition to the app, for some local testing. So, effectively, the proxy you want. It will work with web with a couple of modifications.

Only if you target web – it's all "native" otherwise. Well, it's bytecode, but that doesn't matter much.

Unfortunately (or fortunately?) it's never a webview app – even running on the web (Chrome), it will do all the painting in canvas. That's what makes the Google auth logins a bit trickier.

You can do proper backgrounding on Android with Flutter. On iOS – no; we'll need a server-side solution for that.

Which, at that point, is almost what an open browser tab on your PC does, and that already works :slight_smile: Point being: yeah, many people would want it to be mobile-exclusive, but if someone has to host the bouncer for them, at that point it's more reasonable for @Accipiter to do that, and then he can just improve the server for the mobile apps.


Yeah, I just looked into it and it's very straightforward on Android with the flutter_background Flutter package. It will destroy the user's battery, of course, but who cares about those issues :slight_smile:


Apparently iOS has easy ways to do this natively, so maybe later we can migrate to that if needed.

Wonderful, that makes things a lot better than what I had feared.

Well… hopefully a single HTTP request (per awake character…) once every three minutes shouldn't be too bad for battery… Maybe background usage can be disabled if folks have issues with battery life and don't mind falling asleep while idle. I'm sure other apps we all use daily do more in the background than that.


Which are those? AFAIK there are no compliant methods for this app category.


I found it while reading SO comments from people trying to figure out how to do it on Android, who said that the iOS equivalent is easy; they may have meant something like this Apple Developer Documentation page.


That's not quite the same; your app will still be suspended, and the background task will have to reopen the WS connection, subject to state/data loss :frowning: That's why we need a server change for that.


Isn’t that something that the server already handles?

If the WebSocket connection is closed but pings are still sent periodically, wouldn't the client be flooded with missed content when the connection is reopened?


Not really. Only chat log events are "buffered", for up to 24 hours. But yes, if your character has been in a spammy room, they will indeed get a rather big chunk of chat log events.

All other state-modifying events (adds to/removes from the Awake list, area population changes, description changes, etc.) are not buffered. The client/RES protocol does not rely on events for resynchronization on reconnect, but instead refetches the subscribed data and compares it to the locally cached versions. If there are differences, events will be generated locally to mutate the stale data into the new state.

Example:

  1. Client connects and fetches Awake list
  2. Client gets list: A, B, C
  3. Client gets disconnected.
  4. Client reconnects and tries to resynchronize by fetching the Awake list again
  5. Client gets list: B, C, D
  6. Client will compare the stale list (A, B, C) with the newly fetched one (B, C, D), and generate a local remove event for value at index 0 (A), and an add event for value D at index 2.

So the “cost” of a reconnect is the same as an initial connection. But no event flooding.
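As a toy illustration of that comparison step (the real diffing happens inside ResClient and the RES protocol; the types here are made up, and this version ignores reordered items):

```go
// Compare the stale cached list with the freshly fetched one and emit local
// add/remove events instead of replaying everything that happened while
// disconnected.
package resync

type Event struct {
	Kind  string // "remove" or "add"
	Value string
	Index int
}

// Diff turns the stale list into the fresh one, recording the events.
// With stale = [A B C] and fresh = [B C D] it yields a remove of A at index 0
// and an add of D at index 2, matching the example above.
// (This toy version doesn't handle reordered items; the real protocol does.)
func Diff(stale, fresh []string) []Event {
	var events []Event
	current := append([]string(nil), stale...)

	// Remove anything that is no longer present.
	keep := make(map[string]bool, len(fresh))
	for _, v := range fresh {
		keep[v] = true
	}
	for i := 0; i < len(current); {
		if !keep[current[i]] {
			events = append(events, Event{Kind: "remove", Value: current[i], Index: i})
			current = append(current[:i], current[i+1:]...)
		} else {
			i++
		}
	}

	// Add anything new at the position it occupies in the fresh list.
	for i, v := range fresh {
		if i >= len(current) || current[i] != v {
			events = append(events, Event{Kind: "add", Value: v, Index: i})
			current = append(current[:i], append([]string{v}, current[i:]...)...)
		}
	}
	return events
}
```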
