[Discussion] Bots for interactivity

First, I’m not entirely certain this is the right place for this post, because it’s not to do with areas so much as things to do in them. However, I’ve developed my Python client and bot structure enough that I want to start actually deploying bots for fun things around the world (instead of just, say, mapping Sinder and making dildo jokes, as before).

So, there are two parts to this: I’m going to lay out some of the things I’m currently working on and their intended placements, and I’m seeking feedback.

As of now, I’ve got assets to make a bot navigate the publicly accessible areas throughout the whole map, watch an area and notify someone when a new person arrives, execute a dialog-tree menu, store persistent data (right now a map, but it could be logs of any kind, really), roll dice, say words, and perform action sequences.
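
To give a flavor of how that’s put together, here’s a stripped-down sketch of the dice-rolling piece. Fair warning: the client calls and names here are simplified stand-ins for illustration, not the exact interface.

```python
import random
import re

# Illustrative sketch only: "client" and its send/pose hooks stand in for
# whatever the real bot client exposes.
def handle_roll(client, command):
    """Parse a 'roll 2d6'-style command and act out the result."""
    match = re.fullmatch(r"roll (\d+)d(\d+)", command.strip(), re.IGNORECASE)
    if not match:
        client.send("Usage: roll <count>d<sides>, e.g. roll 2d6")
        return
    count, sides = int(match.group(1)), int(match.group(2))
    count = min(count, 20)  # cap the count to avoid spammy output
    rolls = [random.randint(1, sides) for _ in range(count)]
    client.pose(f"rolls {count}d{sides}: {rolls} (total {sum(rolls)})")
```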

Right now, I have a couple of prospective projects that I think will be fun and add to the spirit of the world:

  • A bartender bot you can order drinks from, which will make them step by step (and probably other similar flavor-interaction bots)
  • A whole suite of bots you can play games with (so far I’ve got tic-tac-toe, Wordle, and checkers, and maybe something like Zork if I’m feeling really ambitious)
  • A casino, with chip counts and everything, or perhaps a general purpose gambling bot for those of you who like to play out wagers
  • A very ambitious one, based on a conversation in Station Park, where a bot runs as a performer (acting out a play or a film from its script)
  • A general utility bot for giving directions, leaving messages for people, escrowing links, greeting guests, and the like

Given those capabilities, and the ideas I’ve laid out that give a sense of what should be achievable, I’d love to hear folks’ thoughts. Which ones do you think would be most enjoyable to add? Which ones might not be ideal for the game itself, even if they seem cool to me? What more specific suggestions do you have for the ideas I outlined above? And beyond those, what do you think could enrich the world?

To start, I think this is exactly the right area: we discuss matters like this here. :slight_smile:

Bots kinda creep me out with their potential for creepiness.

I think this is a good discussion to be having now that the technology is in place to make it possible.

I think there are a lot of interesting uses for bots as ways to bring more “smart” behavior into Wolfery, but there’s also a lot of “potential for creepiness,” to borrow an excellent phrase.

I suspect we’re going to need to establish some guidelines for what is and isn’t OK, so we don’t build creepiness in without meaning to.

My first instinct is that, for me personally not to feel creeped out by bots in general, I’d want a few things to always be true:

  • Bots should always be clearly identifiable as bots if you check.
  • Bots should always describe what they’re for and who maintains them.
  • Bots should always describe what information they’re recording.

So, to use the bartender bot from the original post as an example of my proposed guidelines, I could imagine putting something like this in the “about” section:

I’m The Bartender, an automated bot.
I was built by FuzzyFox.
I stay here in the bar and serve drinks if you ask for them. To ask for a drink, address me and ask. For example: @Bartender=Make me a martini.
I only respond to messages that address me. I keep a log of the messages that I process, and I don’t store anything else.

I feel like something like this would help me decide, when looking at a bot in a room, whether I’m concerned about what I say or do around that bot.

Beyond that, there’s also a much bigger discussion about what sorts of things we are comfortable with having automated, and I think that’s a larger topic and one that different areas may have different guidelines for. But my gut feeling is that some amount of basic labeling would go a long way toward making that conversation possible. (I also think this is a good, easy way for a bot-author to test whether folks are creeped out by their idea without writing code. Start by putting together the quick description of what the bot would say it is and does, and see what folks here on the forums think before actually building it all.)


I had mentioned this to the other staff, but I’ll bring it out here now that it’s being discussed. In my mind there’d be a tag (similar to that of builder, moderator, or pioneer) that showed that the bot was a bot, and it could even grant some small extended privileges.

Maybe, if folks are concerned, this tag could be required before the bot API allows login on non-test (or otherwise protected) realms, meaning untagged bots could log in at test.mucklet.com, but not wolfery.com.

Part of that vetting process could be:

  • A requirement for open-source bot code. Using proprietary code should be fine if the bot itself is open enough for the savvier among us to figure out what’s happening.
  • A commitment to explicitly describing, in IC-reachable venues, what the bot does, what it records, etc. (This could even be enforced via code, making sure descriptions are set correctly on bot startup or something of the sort; see the sketch after this list.)
  • Passing some sort of automated testing, possibly? I’m not very good at unit testing or software QA, but I’m sure we programmers could figure out some form of testing suite.
  • A commitment to the same standards of privacy and ethics the site proper has. I know that, per the topic Help crash the server, information leaking shouldn’t be possible, but security holes are bound to be found, and maybe even used maliciously before they can be patched up. A code of conduct for bots should at least help mitigate that happening from this part of the site.
  • Actually, that topic serves as a good starting point for such a code of conduct. All those things @Accipiter asks us to break on the test server? Yeah, don’t try that on prod, or your bot privileges will be revoked.
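
To make the ‘enforceable via code’ idea above concrete, here’s a minimal sketch of what a startup check might look like. Everything here is hypothetical: the get_about call and the required phrases are stand-ins, not an existing API or policy.

```python
# Hypothetical required disclosures; a real policy would define these.
REQUIRED_DISCLOSURES = ("automated bot", "built by", "log")

def verify_disclosure(client, char_id):
    """Refuse to start the bot if its 'about' text lacks the required disclosures."""
    about = client.get_about(char_id)  # hypothetical API call
    missing = [phrase for phrase in REQUIRED_DISCLOSURES if phrase not in about]
    if missing:
        raise RuntimeError(f"Bot 'about' text is missing disclosures: {missing}")
```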

As long as we have a set of rules for bots the same as we do for people (and bots would obviously have to run on the same rules that real people do, and then some), I think a lot of this will iron itself out on its own.

I am reading this topic with interest. Lurk lurk.
But I also want to point out the following ToS text under Permitted use, which says you are not allowed to:

create or use bots accessing the Service for any other purpose than improving game play

Logging data for any benefit beyond the players using the bot is not considered “improving game play”.

Okay, so. I was hesitant to define ‘creepiness’, as it is highly subjective. And genuine interest can, and often has, come off as bad-faith creepy behavior.

I think creepiness, for me, is someone having an undue amount of information about your character/player despite never having interacted with you.

Now, this touches on the subject of privacy, which is hecking WEIRD on a muck, because having undue information is allowed in certain circumstances, like…

  • You are allowed to have secret alts.
    Now, I value this, as people have different interests and want to play different things. In the past, though, this has brought up many situations where one character/player will have an undue amount of information about me and will use that information to get their hooks into me (or completely ignore me). As an alt player, you try to avoid this type of metagaming, but it is inevitable.
  • Gossip is allowed, but highly frowned upon.
    I always try to respect people’s privacy to the fullest, to the point where I have developed a huge aversion to ever mentioning characters’ names when I talk about experiences I’ve had, unless they are in the room. But this can turn into me not being able to say what I did last weekend, what my character is up to, anything.
  • Your presence on the server is announced, and your activity is monitored
    So people know if you’re there. There’s even a watchfor list that will announce when you connect. Often, just knowing this can be a touch creepy: getting jumped on the moment you log in, someone asking you if you’re ‘busy’. You’d want people to interact with you only when you’re out and about, but people know if you’re there and if you’re active, and can get information from just that.

Where this kind of stuff can get scary is when it gets automated, and I wanted to clarify some of the danger, because even if these particular bits aren’t against the rules, and maybe even enhance roleplay, they lead to…

  • Figuring out your active times/days
    Just knowing this, you can often suss out people’s secrets, like who their alts are. Having a bot online all the time logging this is a big no-no for me.
  • Figuring out who you interact with, mapping out your social group
    People will needle at who you hang out with and try to suss out secret interests. This can get dark and creepy fast.
  • Collecting notes on you
    We all share private information from time to time, and we want to feel open to do so. The lines of IC and OOC can get blurry, and having people collect notes… eugh, I’ve heard horror stories of this.
  • People watching you no matter where you go
    If a bot can trawl around and figure out where you are, even when you’re in public, someone can suss out who you’re with. A lot of dark things I don’t want to think about. Some players will spread characters out across multiple rooms to do just this, and you can know nothing about it.

Now, none of this is particularly against any RULES, except maybe ‘be cool to each other’. But it’s so hecking subjective. If I see any bots or scripts, I want a hard line: being able to avoid them, or to see their internals so I can trust them.

The ‘potential for creepiness’ is exactly why I wanted to seek feedback! I feel like there’s both a technical and a personal side to the requirements. I’ve been trying very hard to err on the side of being polite, even to the point of acknowledging as much in my software’s help and documentation. However, I also feel that that is essentially paper-thin protection. I’m trying very hard to temper my excitement about the possibilities with a good-faith estimation of what does and does not impact other players’ quality of life in the game, and with an aim toward what solely improves that QoL.

That said, even excluding malicious use of the API, I can still see cases where an over-enthusiastic user could, through ignorance of socially acceptable uses of automation or a simple lack of software engineering ability, create something that ranges from ‘mildly annoying’ all the way up through ‘inducing unplayability’.

Further, given that my background is explicitly in AI research (with a lot of specific experience in the medical field), I have been coming to this project with direct considerations around etiquette for development-phase work, as well as privacy and informed consent based on HIPAA regulations.

Pursuant to all that, and informed by my basic forays into UX and functionality testing thus far, I’d like to further complicate the discussion by laying out a few of the things I’ve done, and relate some salient experiences that might affect the way we handle these issues going forward. There’s going to be a wall of text here, but I think it’s important.

  • With regards to informed consent: my sense of propriety and prior standards of conduct for human/robot interaction testing led me to intuit some necessary points on informing people and communicating:
  1. A bot tag is definitely important, and I personally felt it was critical to make ‘bot’ the only permissible tag on a bot, to reduce the chance that other tag clutter distracts people looking.
  2. I felt it was important to make clear in the description who made the bot and what for, and I realized that there should be a clear OOC indication that the bot is not a standard character.
  3. It was immediately apparent that responding to anything other than ‘@’ or ‘msg’ contacts would be difficult, verging on intractable, to both implement and monitor, and that a user-made bot should not be able to react to ‘say’ interactions; the potential for error, abuse, and confusion is just too great. (A minimal sketch of that filtering follows.)
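
The event types and client interface in this sketch are made-up stand-ins, not the actual Mucklet API, and lookup_dialog_tree is a hypothetical handler:

```python
# Sketch only: event type names and the client interface are hypothetical.
HANDLED_EVENTS = {"address", "msg"}  # deliberately excludes "say"

def on_event(client, event):
    """React only to explicitly directed contact; ignore ambient chatter."""
    if event.type not in HANDLED_EVENTS:
        return  # 'say', 'pose', etc. are never processed or logged
    reply = lookup_dialog_tree(event.text)  # hypothetical dialog-tree handler
    client.reply(event, reply)
```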

Some observations on this point:
  • People seem to struggle with the IC/OOC distinctions between the bot, the creator, anyone who happens to be chaperoning the bot at the time, and practical interactions, and there should probably be rules regarding both the play aspects and the required presentation.
  • People do not seem to internalize an instruction to use the recommended ‘help’ function outlined in the bot’s ‘about’. I realize that lots of people don’t really read a character’s ‘about’, and of course every designer should assume users won’t read the manual. A shared cultural expectation for how to identify, interact with, and play alongside bots might be needed.
  • I think we need to have a discussion about how a bot may and may not prompt a user. People who try to ‘say give me a drink’ at the bartender bot and get no response would miss out on the fun, not knowing that you’ve got to @ bots. Some options that occur to me are occasional, IC-appropriate announcements (such as ‘Can I get anyone anything to drink? ((ooc: @ me with your request))’, on condition that new characters are in the room and at least X seconds have passed since the last announcement; see the sketch after this list), or whispers or messages to new characters encountering the bot (which begs the question: is saving a note that ‘so-and-so has been informed of the bot’ in the bot’s files too much saved data? More on that later).
  • It seems obvious to me, but there should absolutely be a required period of time during which the bot must be chaperoned by an active player who is directly monitoring it (and not in the ‘playing, but going back and forth between characters, tabs, or other activities’ way we all do sometimes that leads to two-minute response times) before it can be left running without supervision.
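
Here’s roughly what that announcement condition looks like in practice, as a sketch; the client call and event shape are illustrative stand-ins, not the real API:

```python
import time

ANNOUNCE_COOLDOWN = 300  # seconds; the "X" from the bullet above

class Announcer:
    def __init__(self):
        self.last_announce = 0.0
        self.seen = set()  # characters already greeted this session

    def on_arrival(self, client, char_id):
        now = time.time()
        is_new = char_id not in self.seen
        self.seen.add(char_id)
        # Only prompt when someone new arrives AND the cooldown has elapsed,
        # so two characters entering together trigger at most one announcement.
        if is_new and now - self.last_announce >= ANNOUNCE_COOLDOWN:
            self.last_announce = now
            client.say("Can I get anyone anything to drink? "
                       "((ooc: @ me with your request))")
```

The same throttle also covers the rate-limiting point further down: a burst of arrivals produces one prompt, not three.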

  • There is also a suite of technical considerations I’ve been looking at how to balance, which I think inform the discussion on requirements for who can make bots, because of the finesse required, and on what sorts of behaviors might be required of the bot as a result of decency standards.
  1. I’ve been applying the standards of my technical field to the bot’s performance, which not every user who might be interested in playing with the API will have. However, restricting that access could stifle creativity and growth. A system similar to how building and linking work would be helpful, without a doubt.
  2. Testing of very open-ended systems is tough. A lot of modern software engineering relies on unit testing, but even defining the basic test suite for an AI-driven model can be intractable. That’s why I think an ‘open-source it, vet it, chaperone it for a period’ model for acceptance should be necessary.
  3. Because the nuances of the artistic programming required to build stable but flexible bots set a fairly high bar, it seems to me like some sort of graduated privileges system for bots would be ideal; for instance, bots made by non-bot-maker players would not be allowed to ‘say’ to, or ‘message’, non-owned characters on the non-test site until approved by a bot-maker.

Based on those observations, I have a few notes as well:
  • Rate limiting is important: I’ve been paying close attention to ensuring that the bot performs any visible actions at a speed commensurate with a human player’s; for instance, in the example above, not making the ‘anyone want a drink’ prompt every single time a character enters, or three times in a row when two characters following one another enter at once. There are a lot of subtleties there that require skillful execution to be IC-appropriate and improve the game rather than make it annoying.
  • Contact volume is also important: a bot that says too much at a time is going to get ignored. In my basic trials recently, I’ve noticed that people tend to read only a little bit of the help file, and miss bits like whether the bot is currently taking messages or @s. The same principle applies to a bot that you can talk to: what if the chat engine produces, as they sometimes do, three paragraphs of text? It’ll look rude and break people out of the spirit.
  • Appropriate responses that don’t break suspension of disbelief are absolutely critical. It would be easy for novices to implement bots which, basically, fail to ‘yes-and’ roleplayers. My current draft of the bartender bot is starting off with several hundred lines of kinds of drinks, pattern matching for ordering, and similar IO formatting (a toy sketch of the pattern matching follows this list), entirely because as I go through I keep thinking of other branch possibilities and increasing the size of the system to account for them in a verisimilitudinous way. There certainly needs to be a sort of quality-standard rubric, same as there is for builders, for that sort of interaction; not everyone will come to the process with the right programming ethos to enforce that QA themselves.
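
As promised, a heavily stripped-down toy of that ordering pattern matching; the real drink table is several hundred lines, and everything here is illustrative:

```python
import re

# Toy drink table; the real one is far larger and more nuanced.
DRINKS = {
    "martini": "stirs gin and vermouth over ice, strains it, and adds an olive.",
    "old fashioned": "muddles sugar and bitters, adds whiskey, and garnishes with orange.",
    "beer": "pulls a pint from the tap and slides it over.",
}

ORDER_PATTERN = re.compile(
    r"(?:make|get|give|pour|fix)\s+(?:me\s+)?(?:an?\s+)?(?P<drink>.+?)[.!?]?$",
    re.IGNORECASE,
)

def handle_order(text: str) -> str:
    """Map a free-form order onto a known drink, with an IC 'yes-and' fallback."""
    match = ORDER_PATTERN.search(text)
    if match:
        drink = match.group("drink").strip().lower()
        if drink in DRINKS:
            return DRINKS[drink]
    # Fall back in character rather than with a flat error message.
    return 'hums thoughtfully. "Not sure I know that one. House special instead?"'
```

For example, handle_order("Make me a martini") returns the martini pose, while an unknown order gets the in-character fallback instead of breaking the scene.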

Finally, there are a couple of prospective points I’d like to raise:

  • One of the initial modules I thought of was a ‘find so-and-so’ bot that would go looking for a character. I scrapped that one, and it was half the reason for me to start this discussion, because a) it definitely erred on the ‘creepy’ side, and b) it seemed like it violated IC privacy, vis-a-vis the implicit ludonarrative goal imposed by only being able to ‘look’ at characters in the room. I think a sort of operational style guide for how a bot should ‘seem’ IC, along with OOC requirements for programming openness and QA, would be a strong benefit to the spirit of the game, addressing thematic and RP concerns like this.
  • I’d love to bring some more sophisticated operation, namely AI and machine learning, into the world. I think it could offer a level of richness you don’t usually see in RP spaces like this, and it is uniquely supported by the structure of Wolfery’s story, design, and backend. However, that draws the ‘logging data’ question into sharp focus. Is a classical AI that remembers who it met and what sorts of things they’ve talked about too far over the line? Is a machine learning system that doesn’t save explicit data, but does update its model with abstractions, too much? Is even the ‘casino’ idea I pitched too much if it keeps a register of a character’s wins and losses? I think there’s an enormous potential benefit, and also an extreme risk, associated with this. Wolfery is quite uniquely positioned to benefit from systems like these, but the question of access control, permission, and vetting is really important, to balance risk, free creativity, and richness.

A quick edit to discuss some of @Harcourt’s extremely good points, since that post went up near-simultaneously! One of the key issues in regulation of AI is restricting the sort of persistent logging and analysis that can enable people to discover information that might otherwise be considered in the domain of ‘private knowledge’. The issue is that some of that is entirely the same sort of thing that a stalker can do, just magnified in impact, because a robot can do it faster, better, and without resting.

A key starting point is that if the code is open-sourced, hiding that sort of behavior will be hard: a baseline would be that any software making calls that record data (or calls out to an obfuscated external system) is automatically rejected. Of course, that requires technical expertise on the part of the vetter, as I alluded to in some of my points above. The construction of a specific approval-requirements document is out of the scope of this post, but creating a standard that includes provisions like limitations on what calls can be made, what external libraries can be used, and the like will be essential.
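
To make ‘automatically rejected’ concrete, here’s a toy sketch of the kind of static check a vetter could run over submitted bot code. Real vetting would need far more than this, and the deny-lists here are just examples, not a proposed standard:

```python
import ast

# Example deny-lists; an actual approval standard would define these.
DISALLOWED_CALLS = {"open", "eval", "exec"}
DISALLOWED_MODULES = {"socket", "requests", "urllib", "subprocess"}

def audit_bot_source(source: str) -> list[str]:
    """Flag disallowed imports and bare calls in a bot's source code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name for alias in node.names]
            if isinstance(node, ast.ImportFrom) and node.module:
                names.append(node.module)
            for name in names:
                if name.split(".")[0] in DISALLOWED_MODULES:
                    findings.append(f"line {node.lineno}: import of {name}")
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DISALLOWED_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
    return findings
```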