I've tried Horizon Worlds on the Meta Quest, and the implementation of this feature is fucking awful. There's a lot of notification spam cluttering the screen that you can't dismiss.
The level of safety varies from instance to instance. If the AI detects someone being disruptive, it asks if you want to mute them, even if they're far away from you. You're notified when your voice radius is extended because the AI deems you not to be a threat. You can garble people's voices like in that episode of Black Mirror.
Basically, you're constantly reminded that you're being recorded.
Every software platform is essentially a digital panopticon and as AI gets deployed more widely in the real world this will also be increasingly true for non-software interactions. My guess is everyone is eventually going to carry an AI "assistant" that records all signals and gives guidance to its owner just like most people in the developed world today carry a cell phone and a credit card.
>like in that episode of Black Mirror.
that's never a sentence you want to hear about your product
This application of AI is the one I'm most terrified by. Even in the benign use it's training a bunch of people into censoring themselves into a specific way of speaking.
The only way I can think of this being justified is if it's run and controlled by the user themself so they can control what they hear. Not to control how others speak.
You can't run a public service without moderation, unless you want to end up with a toxic swamp. Certainly not a kids' game. Typically it's worked around by not allowing player-to-player chat at all, or by making it extremely limited, but even then you get workarounds (see Dark Souls with "try finger, but hole").
Moderation tries to enforce some standard for the whole environment. Even if you have the tools to implement filtering for yourself, you're still sitting there with people who are happy with the abusive environment and will get through one way or another, and you find them everywhere around you. It's not something each player can deal with separately unless they opt into private groups - but that's not the environment Roblox is encouraging.
I'd rather not have any moderated discussions than give totalitarian dictatorships the ability to monitor all communication in real time.
Don't use Roblox then. I guarantee every parent ever would rather their kids have access to a moderated Roblox than an unmoderated one, though.
This analogy doesn't make sense for a game designed exclusively for kids where the "totalitarian government" is the company that runs the game.
There's no analogy. Like most technology this is dual use and can easily be repurposed for mass surveilance.
Voice recognition and saving audio streams is not new. It has basically nothing to do with this tech. It's not getting repurposed for surveillance.
> It's not getting repurposed for surveillance.
This absolutely will be used for surveillance. Its literal express purpose is surveillance and automated behavior modification.
Why? Surveillance and voice recognition have been possible for a long time, as has low-cost triggering on phrases without full analysis. There's no point using a system like this for real-time processing if you can capture the stream for batch processing later. This system even removed some transformer layers, sacrificing accuracy for lower latency.
Basically, anyone who wanted to do surveillance like this was already doing it with public or secret implementations. This won't meaningfully help anyone.
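The layer-removal trade-off mentioned above is mechanically simple. A minimal sketch (all names and numbers invented for illustration, not taken from Roblox's model): treat a transformer-style encoder as a stack of layer functions, where dropping the top layers cuts per-utterance latency at some cost in accuracy.

```python
# Hypothetical sketch of trading accuracy for latency by truncating layers.
# Layer count and delays are invented; this is not Roblox's actual code.
import time

def make_layer(delay_s):
    """Stand-in for one transformer layer: a fixed compute cost per call."""
    def layer(x):
        time.sleep(delay_s)  # simulate the layer's compute time
        return x + 1         # trivial stand-in for feature refinement
    return layer

full_model = [make_layer(0.01) for _ in range(12)]  # 12-layer encoder
fast_model = full_model[:8]                         # keep bottom 8 layers

def run(model, x):
    for layer in model:
        x = layer(x)
    return x

start = time.perf_counter()
run(fast_model, 0)
fast_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
run(full_model, 0)
full_ms = (time.perf_counter() - start) * 1000

print(f"full: {full_ms:.0f} ms, truncated: {fast_ms:.0f} ms")
```

The point is only that truncation reduces wall-clock time roughly in proportion to the layers dropped; whether the remaining layers classify well enough is the empirical question.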
I disagree. I think the fact that companies like Roblox still struggle with moderating discussions and are doing research into this area proves the point that it's not as solved a problem as you are stating it is.
Depending on what the desired level of "allowed exposure" is, manual reports + reviews can also work, without having up-front always-on live redaction of the audio feed.
E.g. while it's not perfect, Valorant's voice chat is a lot less toxic than many comparable FPS games thanks to such a system. Of course, that still won't protect you from hearing the occasional slur.
So that + elective client side filtering (if you really never want to experience profanity), seems like the best of both worlds.
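Elective client-side filtering can be as simple as a word-list substitution applied to the decoded transcript on the listener's machine, so nobody else's speech is altered at the source. A toy sketch (the word list and function name are made up):

```python
import re

# Toy client-side filter: each client applies its own list locally.
# The blocked list here is illustrative, not any platform's actual list.
BLOCKED = {"damn", "hell"}

def bleep(text, blocked=BLOCKED):
    """Replace blocked words (case-insensitive, whole-word) with asterisks."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, blocked)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: "*" * len(m.group()), text)

print(bleep("What the hell was that?"))  # -> What the **** was that?
```

The design point is that the filter runs after delivery, so opting in affects only what you hear, not what others are allowed to say.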
> manual reports + reviews can also work
Manual reports + reviews require recording the feed for later review; real-time filtering does not. A number of other commenters have mentioned that being recorded bothers them; those folks should be arguing for the real-time AI moderator.
I'm not sure how many people are interested in not hearing abuse -vs- not playing with abusive people. If someone called your name and then you heard bleeping, you know exactly what happened anyway. Is there actually a group of people happy with that result?
> Even in the benign use it's training a bunch of people into censoring themselves into a specific way of speaking.
Imagine a place on the internet where your speech is moderated, like a place you’re not forced to use and has rules about what you can and can’t post. Speech that goes against the rules could be hidden or removed and the users of the service train themselves to speak in a certain way when they are using it.
Perish the thought, that sounds like a dang dystopian Netflix show
Now imagine if that place was everywhere at all times because it's completely automated and cheap.
Imagine if optional things on Roblox became mandatory in society… spooky stuff
Is communication optional? You are restricting this to only being used by Roblox, it will be used by everyone.
But yes, please keep defending the development of mass surveillance technology.
This is like saying you saw a No Shirt No Shoes No Service sign at a restaurant and are worried that men with guns might come and enforce that rule in your home. Would that be terrible? Yes. Does the fact that the sign exists mean that you’re fated to nail it to your own wall? No.
Eh... as long as it's used in environments where people want to have a good time, I don't care. I know a lot of people will stomp their feet and talk about precedents and dangerous steps, but having played games on EUW servers for 25 years now, I am all for it.
Doubleplusungood.
Direct link to the article is here: https://research.roblox.com/tech-blog/2024/07/deploying-ml-f...
Like hanafuda cards, isn't this just going to cause people to "hide" the policy-violating speech behind new slang?
> In 1648, Tenshō Karuta were banned by the Tokugawa shogunate.[13] During prohibition, gambling with cards remained highly popular which led to disguised card designs. Each time gambling with a card deck of a particular design became too popular, the government banned it, which then prompted the creation of a new design. This cat-and-mouse game between the government and rebellious gamblers resulted in the creation of increasingly abstract and minimalist regional patterns (地方札). These designs were initially called Yomi Karuta after the popular Poch-like game of Yomi which was known by the 1680s.[14]
Via https://en.m.wikipedia.org/wiki/Hanafuda
Probably, a more recent example would be the language developed to evade (possibly imagined) censorship in the TikTok algorithm, which conditioned users to start saying "unalived" instead of "killed" and so on.
Funnily enough, there's a Roblox meme where people say "go commit die" because "commit suicide" triggers the censorship algorithm.
Discussing the effectiveness of the technique requires believing that the system has a legitimate purpose and was made with genuine intent.
Is it good to automatically record, transcribe, analyze, and censor all private conversations? Is their goal even censorship, or is that a false 'child safety' justification for mass surveillance?
How is ROBLOX adjusting content moderation for 17+ experiences? I see one of the tensors in the model is "DatingAndSexting", but in one of their developer conferences ROBLOX said they expect (age-verified with ID) 17+ dating experiences to be in their future.
https://www.reddit.com/r/roblox/comments/16dlijo/roblox_aims...
If ROBLOX is not splitting up dating vs sexting, I see major problems with that. In an age-verified 17+ experience, I should be able to ask someone to go on a date with me. I showed my passport to ROBLOX to get in after all. Ditto for "Profanity". 17+ experiences can have strong language, unless it's directed at someone else.
https://devforum.roblox.com/t/17-experiences-can-add-support...
The guide says that I can say "fuck", but not tell someone to "fuck off". So, does "fuck off" fall under profanity or bullying?
Right now, I have a lifetime premium account for ROBLOX from 2008. The rumour I hear is that I'm on a hair trigger for bans because Roblox is trying to clear out lifetime premiums, since they cost the company too much money.
It would make me feel more comfortable with using voice chat in 17+ experiences if I had more insight into the machine learning classifiers. What are the thresholds Roblox is using in production? Do they differ based on experience or the user? Can I easily run this model locally, talk some trash into the mic, and see what the boundaries are?
Right now, I avoid using voice chat despite having verified because I don't want to lose my valuable account. It would make me feel better if I had an idea of what the boundaries were, so I could avoid them.
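The thresholding this commenter asks about is mechanically simple; what's unknown is the actual cutoff values in production. A hypothetical sketch of per-category cutoffs over multilabel classifier scores (category names mirror those mentioned in the thread; the threshold values are invented):

```python
# Hypothetical per-category thresholding over multilabel classifier scores.
# Category names match those discussed above; thresholds are invented.
THRESHOLDS = {
    "Profanity": 0.9,         # tolerated in 17+ experiences -> high bar
    "Bullying": 0.5,
    "DatingAndSexting": 0.7,
}

def flagged(scores, thresholds=THRESHOLDS):
    """Return categories whose score meets or exceeds their cutoff."""
    return sorted(c for c, s in scores.items()
                  if s >= thresholds.get(c, 1.0))

scores = {"Profanity": 0.95, "Bullying": 0.2, "DatingAndSexting": 0.1}
print(flagged(scores))  # -> ['Profanity']
```

If thresholds really do differ by experience or by user, as the commenter wonders, that would just mean swapping in a different `thresholds` dict per context; running the open-sourced model locally against your own speech would be the only way to probe the scores themselves.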
One of the few things I know about Roblox is that its moderation is about as good as a robo-support agent with a pair of dice. I guess that system is open source now?
What an exciting application of AI
...
...paid for with revenue from years of predatory sales tactics: exploiting deficiencies in Apple's parental controls by tricking children into buying things without their parents' permission, then refusing to refund them. Fuck these parasites.
Oooh, now I want this in a browser extension (or hey, even a system-wide audio filter) to automatically censor profanity in e.g. YouTube videos.
What would you use it for, a kid?
Yeah I cant imagine a use case for this