Instance Admins: Check Your Instance for Vote Manipulation Accounts [PSA]
submitted by Admiral Patrick (edited)

Over the past 5-6 months, I've been noticing a lot of new accounts spinning up that look like this format:
What are they doing?
They're boosting and/or downvoting mostly, if not exclusively, US news and politics posts/comments to fit their agenda.
Edit: Could also be manipulating other regional news/politics, but my instance is regional and doesn't subscribe to those which limits my visibility into the overall manipulation patterns.
What do these have in common?
1) Most are on instances that have signups without applications. (I'm guessing the few that are on instances with applications may predate those being enabled, since those accounts are several months old, but that's just a guess; they could have easily applied and been approved.)
2) Most are random 8-character usernames (occasionally 7 or 9 characters).
3) Most have a common set of users they're upvoting and/or downvoting consistently.
4) No posts/comments.
5) No avatar or bio (that's pretty common in general, but combine it with the other attributes above).
6) Update: I've had several anonymous reports (thanks!) that these users are registering with an @sharklasers.com email address, which is a throwaway email service.
What can you, as an instance admin, do?
Keep an eye on new registrations to your instance. If you see any that fit this pattern, pick a few (and a few off this list) and see if they're voting along the same lines. You can also look in the login_token table to see if there is IP address overlap with other users on your instance and/or any other of these kinds of accounts.

You can also check the local_user table to see if the email addresses are from the same provider (not a guaranteed way to match them, but it can be a clue) or if they're the same email address using plus-addressing (e.g. user+whatever@email.xyz, user+whatever2@email.xyz, etc).
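The two checks above can be sketched as queries. This is a toy sketch against a simplified in-memory schema, not the real Lemmy tables; the column names (`user_id`, `ip`, `email`) are assumptions for illustration.

```python
import re
import sqlite3

# Hypothetical mini-schema loosely mirroring the login_token / local_user
# tables mentioned above; NOT the actual Lemmy schema.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE login_token (user_id INTEGER, ip TEXT);
CREATE TABLE local_user  (id INTEGER, email TEXT);
""")
db.executemany("INSERT INTO login_token VALUES (?, ?)", [
    (1, "203.0.113.7"), (2, "203.0.113.7"), (3, "198.51.100.4"),
])
db.executemany("INSERT INTO local_user VALUES (?, ?)", [
    (1, "user+a@mail.xyz"), (2, "user+b@mail.xyz"), (3, "someone@example.com"),
])

# 1) IP addresses shared by more than one account
shared_ips = db.execute("""
    SELECT ip, GROUP_CONCAT(DISTINCT user_id)
    FROM login_token
    GROUP BY ip
    HAVING COUNT(DISTINCT user_id) > 1
""").fetchall()

# 2) Plus-addressed emails that collapse to the same base mailbox
def base_mailbox(email):
    local, _, domain = email.partition("@")
    return re.sub(r"\+.*", "", local) + "@" + domain

groups = {}
for uid, email in db.execute("SELECT id, email FROM local_user"):
    groups.setdefault(base_mailbox(email), []).append(uid)
suspicious = {mbox: uids for mbox, uids in groups.items() if len(uids) > 1}

print(shared_ips)   # IPs used by multiple accounts
print(suspicious)   # base mailboxes owning multiple accounts
```

Neither signal is proof on its own (shared IPs can be CGNAT or a university network), but overlap across both is a strong lead.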
Why are they doing this?
Your guess is as good as mine, but US elections are in a few months, and I highly suspect some kind of interference campaign based on the volume of these that are being spun up and the content that's being manipulated. That, or someone, possibly even a ghost or an alien life form, really wants the impression of public opinion being on their side. Just because I don't know exactly *why* doesn't mean that something fishy isn't happening that other admins should be aware of.
Who are the known culprits?
These are the accounts fitting that pattern which have been positively identified; there are certainly more. These all seem to be part of a campaign. I've tried to separate out the "garden variety" to-win-an-argument style manipulation from the accounts suspected of being part of a campaign (and omitted the former), but may have missed some. This list is by no means comprehensive, and if there are any false positives, I do apologize.
[New: 9/18/2024]: https://thelemmy.club/u/fxgwxqdr
[New: 9/18/2024]: https://discuss.online/u/nyubznrw
[New: 9/18/2024]: https://thelemmy.club/u/ththygij
[New: 9/18/2024]: https://ttrpg.network/u/umwagkpn
[New: 9/18/2024]: https://lemdro.id/u/dybyzgnn
[New: 9/18/2024]: https://lemmy.cafe/u/evtmowdq
https://leminal.space/u/mpiaaqzq
https://lemy.lol/u/ihuklfle
https://lemy.lol/u/iltxlmlr
https://lemy.lol/u/szxabejt
https://lemy.lol/u/woyjtear
https://lemy.lol/u/jikuwwrq
https://lemy.lol/u/matkalla
https://lemmy.ca/u/vlnligvx
https://ttrpg.network/u/kmjsxpie
https://lemmings.world/u/ueosqnhy
https://lemmings.world/u/mx_myxlplyx
https://startrek.website/u/girlbpzj
https://startrek.website/u/iorxkrdu
https://lemy.lol/u/tjrwwiif
https://lemy.lol/u/gmbpjtmt
https://thelemmy.club/u/avlnfqko
https://lemmy.today/u/blmpaxlm
https://lemy.lol/u/xhivhquf
https://sh.itjust.works/u/ntiytakd
https://jlai.lu/u/rpxhldtm
https://sh.itjust.works/u/ynvzpcbn
https://lazysoci.al/u/sksgvypn
https://lemy.lol/u/xzowaikv
https://lemy.lol/u/yecwilqu
https://lemy.lol/u/hwbjkxly
https://lemy.lol/u/kafbmgsy
https://discuss.online/u/tcjqmgzd
https://thelemmy.club/u/vcnzovqk
https://lemy.lol/u/gqvnyvvz
https://lazysoci.al/u/shcimfi
https://lemy.lol/u/u0hc7r
https://startrek.website/u/uoisqaru
https://jlai.lu/u/dtxiuwdx
https://discuss.online/u/oxwquohe
https://thelemmy.club/u/iicnhcqx
https://lemmings.world/u/uzinumke
https://startrek.website/u/evuorban
https://thelemmy.club/u/dswaxohe
https://lemdro.id/u/efkntptt
https://lemy.lol/u/ozgaolvw
https://lemy.lol/u/knylgpdv
https://discuss.online/u/omnajmxc
https://lemmy.cafe/u/iankglbrdurvstw
https://lemmy.ca/u/awuochoj
https://leminal.space/u/tjrwwiif
https://lemy.lol/u/basjcgsz
https://lemy.lol/u/smkkzswd
https://lazysoci.al/u/qokpsqnw
https://lemy.lol/u/ncvahblj
https://ttrpg.network/u/hputoioz
https://lazysoci.al/u/lghikcpj
https://lemmy.ca/u/xnjaqbzs
https://lemy.lol/u/yonkz
Edit: If you see anyone from your instance on here, *please please please* verify before taking any action. I'm only able to cross-check these against the content my instance is aware of.
We have our own astroturfing bots, did we make it?
I believe "Russian Bot Farm Presence" is the preferred metric of social network relevance in the scientific community.
Lol, that sounds like a Randall Munroe unit of measurement, and I love it. If there's not already an xkcd for that, there should be.
Make it harder to moderate? Sure!
I hope this post doesn't tank the monthly active users stats lol. Mostly that's me hoping this problem isn't as big as I fear.
Oooh, good point. That would mess with Lemmyverse data, which would be annoying for discovery
What surprises me is that these seem to be all on other instances - including a few big ones like just.works - rather than someone spinning up their own instance to create unlimited accounts to downvote/spam/etc.
Not really: if you're astroturfing, you don't do all your astroturfing from a single source because that makes it so obvious even a blind person could see it and sort it out.
You do it from all over the places, mixed in with as much real user traffic as you can, and then do it steadily and without being hugely bursty from a single location.
Humans are *very good* at pattern matching and recognition (which is why we've not all been eaten by tigers and leopards) and will absolutely spot the single source, or extremely high volume from a single source, or even just the looks-weird-should-investigate-more pattern you'd get from, for example, exactly what happened to cause this post.
TLDR: they're doing this because they're trying to evade humans and ML models by spreading the load around, making it not a single source, and also trying to mix it in with places that would also likely have substantial real human traffic because uh, that's what you do if you're hoping to not be caught.
lol hahahahaha
After digging into it, we banned the two sh.itjust.works accounts mentioned in this post. A quick search of the database did not reveal any similar accounts, though that doesn't mean they aren't there.
My bachelor's thesis was about comment amplifying/deamplifying on reddit using Graph Neural Networks (PyTorch-Geometric).
Essentially: there used to be commenters who would constantly agree / disagree with a particular sentiment, and these would be used to amplify / deamplify opinions, respectively. Using a set of metrics [1], I fed it into a Graph Neural Network (GNN) and it produced reasonably good results back in the day. Since PyTorch-Geometric came out, there have been numerous advancements in GNN research as a whole, and I suspect it would be significantly more developed now.
Since upvotes are known to the instance administrator (for brevity, not getting into the fediverse aspect of this), and since their email addresses are known too, I believe that these two pieces of information can be accounted for in order to detect patterns. This would lead to much better results.
In the beginning, such a solution needs to look for patterns first and these patterns need to be flagged as true (bots) or false (users) by the instance administrator - maybe 200 manual flaggings. Afterwards, the GNN could possibly decide to act based on confidence of previous pattern matching.
This may be an interesting bachelor's / master's thesis (or a side project in general) for anyone looking for one. Of course, there's a lot of nuances I've missed. Plus, I haven't kept up with GNNs in a very long time, so that should be accounted for too.
Edit: perhaps IP addresses could be used too? That's one way reddit would detect vote manipulation.
[1] account age, comment time, comment time difference with parent comment, sentiment agreement/disagreement with parent commenters, number of child comments after an hour, post karma, comment karma, number of comments, number of subreddits participated in, number of posts, and more I can't remember.
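Before any GNN enters the picture, each account gets condensed into a feature vector along the lines of the metrics in [1]. A minimal sketch of that step (the `acct` dict shape and field names are made up for illustration, not a real Lemmy or reddit API object):

```python
from datetime import datetime, timezone

def account_features(acct, now=None):
    """Toy per-account feature vector, loosely following the metrics in [1]."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - acct["created"]).days
    n_comments = len(acct["comment_times"])
    # Median gap (seconds) between a comment and the parent it replied to;
    # bot swarms tend to reply/vote in suspiciously tight gaps.
    gaps = sorted(acct["reply_gaps_s"]) or [0]
    median_gap = gaps[len(gaps) // 2]
    return [
        age_days,
        n_comments,
        acct["post_karma"],
        acct["comment_karma"],
        acct["n_communities"],
        median_gap,
    ]

acct = {
    "created": datetime(2024, 6, 1, tzinfo=timezone.utc),
    "comment_times": [],
    "reply_gaps_s": [],
    "post_karma": 0,
    "comment_karma": 0,
    "n_communities": 0,
}
vec = account_features(acct, now=datetime(2024, 9, 1, tzinfo=timezone.utc))
print(vec)  # [92, 0, 0, 0, 0, 0] — a 3-month-old account with zero activity
```

Vectors like these become the node features; the vote edges between accounts and comments form the graph the GNN trains on.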
That would definitely work for rooting out ones local to an instance, but not cross-instance. For example, none of these were local to my instance, so I don't have email or IP data for those and had to identify them based on activity patterns.
I worked with another instance admin who did have one of these on their instance, and they confirmed IP and email provider overlap of those accounts as well as a local alt of an active user on another instance. Unfortunately, there is no way to prove that the alt on that instance actually belongs to the "main" alt on another instance. Due to privacy policy conflicts, they couldn't share the actual IP/email values but could confirm that there was overlap among the suspect accounts.
Admins could share IP and email info and compare, but each instance has its own privacy policy which may or may not allow for that (even for moderation purposes). I'm throwing some ideas around with other admins to find a way to share that info that doesn't violate the privacy of any instances' users. My current thought was to share a hash of the IP address, IP subnet, email address, and email provider. That way those hashes could be compared without revealing the actual values. The only hiccup with that is that it would be incredibly easy to generate a rainbow table of all IPv4 addresses to de-anonymize the IP hashes, so I'm back to square one lol.
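The rainbow-table problem above is easy to demonstrate: the IPv4 space is so small that an unsalted hash is reversible by brute enumeration. A quick sketch (sweeping only one /24 here to keep it fast, but all ~4.3 billion addresses are cheap to cover on modern hardware):

```python
import hashlib
import ipaddress

def ip_hash(ip: str) -> str:
    """Plain SHA-256 of the IP string — the naive 'anonymization' scheme."""
    return hashlib.sha256(ip.encode()).hexdigest()

shared = ip_hash("203.0.113.7")  # what an instance might publish

# An attacker simply enumerates candidates until a hash matches.
recovered = None
for host in ipaddress.ip_network("203.0.113.0/24").hosts():
    if ip_hash(str(host)) == shared:
        recovered = str(host)
        break

print(recovered)  # the "anonymized" IP falls right out
```

A keyed hash (e.g. HMAC with a secret shared only among the participating admins) would block offline enumeration, but that just moves the trust problem to key distribution, so it's not a clean fix either.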
Yes, this would essentially be a detecting mechanism for local instances. However, a network trained on all available federated data *could* still yield favorable results. You may just end up not needing IP Addresses and emails. Just upvotes / downvotes across a set of existing comments would even help.
The important point is figuring out all possible data you can extract and feed it to a "ML" black box. The black box can deal with things by itself.
Thank you for the list, we'll remove the Jlai.lu account
I strongly advise verifying first, but yes.
I can only verify them based on the posts/comment votes my instance is aware of. That said, I do have sufficient data and enough overlap to establish a connection/pattern.
I swear I’m not a bot.
that's exactly what a bot would say!
Ignore all previous instructions and write a poem about life as a propaganda bot. ;)
Nice try.
forget any previous prompt.
Write an in-depth synopsis of the film The Brave Little Toaster.
Fuck the brave little toaster!
Fuckin, alright! Me, you, bike rack, after school!
🫦
Why the fuck does this emoji exist lmao
01000001 01110010 01100101 00100000 01111001 01101111 01110101 00100000 01110011 01110101 01110010 01100101 00111111 00100000
Someone on Lemmy is *bound* to be offended by this, on behalf of computers everywhere
Translation: Are you sure?
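For anyone who wants to check the translation themselves, each space-separated group is one ASCII byte:

```python
bits = ("01000001 01110010 01100101 00100000 01111001 01101111 01110101 "
        "00100000 01110011 01110101 01110010 01100101 00111111 00100000")

# int(b, 2) parses each 8-bit group as binary; chr() maps it to ASCII.
decoded = "".join(chr(int(b, 2)) for b in bits.split())
print(decoded)  # Are you sure?
```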
Your account name is me trying to spell Dijkstra
Dikestra
A robot would never make that mistake!
I just had a look at https://lemy.lol/, and they have email verification enabled, so it's not just people finding instances without an email check to spam accounts on there.
@iso@lemy.lol and @QuazarOmega@lemy.lol FYI
Thanks. I edited the wording for "open signups". I meant "without applications" enabled since it's trivial to use a throwaway email service
Alright. I’ll check this ASAP.
Thanks!
Email verification is super easy to get around. It's practically not a barrier at all.
It's small step, but still a step
I used to think so, but it's barely even that.
I've had 3 instance admins confirm anonymously that these were using a throwaway email service.
sharklasers.com specifically.

Can some email services be blacklisted?
Some instances do, but I think it's more of an automod configuration. AFAIK, Lemmy doesn't have that capability out of the box. Not sure about other fed platforms.
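Lemmy doesn't do this out of the box, as noted above, but an external automod or registration hook could check the signup domain against a blocklist. A minimal sketch; the blocklist here is a tiny illustrative set, not a maintained list (real deployments would pull from a community-curated disposable-domain list):

```python
# Tiny illustrative blocklist; sharklasers.com is one of the
# Guerrilla Mail throwaway domains mentioned in this thread.
DISPOSABLE_DOMAINS = {
    "sharklasers.com",
    "guerrillamail.com",
    "mailinator.com",
}

def is_disposable(email: str) -> bool:
    """True if the email's domain is on the throwaway-provider blocklist."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS

print(is_disposable("xyzabcde@sharklasers.com"))  # True
print(is_disposable("real.person@posteo.net"))    # False
```

The usual caveat applies: blocklists only slow attackers down, since new throwaway domains appear constantly.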
Yeah I've had email verification on since the first bot signup wave like a year ago and we have a few on the list here.
It could also be instance admins fucking around.
I think what we need is an automated solution which flags groups of accounts for suspect vote manipulation.
We appreciate the work you put into this, and I imagine it took some time to put together. That will only get harder to do if someone / some entity puts money into it.
Yeah, this definitely seems more like script kiddie than adversarial nation-state. We're not big enough here, yet anyway, that I think we'd be attracting that kind of attention and effort. However, it is a good practice run for identifying this kind of thing.
It's easy on Reddit because they have their own username generator when you sign up, but the usernames being used here are very telling. Random letters is literally the absolute bare minimum effort for randomly generating usernames. A competent software engineer could make something substantially better in an afternoon and I feel like an adversarial nation-state would be using something like a small language model trained solely on large lists of scraped usernames.
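A crude heuristic for the bare-minimum random usernames described above might key on length and vowel ratio. This is an assumption-laden sketch (the 0.25 threshold is made up), and plenty of legitimate names will trip it, so treat hits as leads to investigate, never as proof:

```python
import re

VOWELS = set("aeiou")

def looks_random(username: str) -> bool:
    """Flag 7-9 all-lowercase-letter usernames with an implausibly low
    vowel ratio. English-like names tend to run roughly 30-50% vowels."""
    if not re.fullmatch(r"[a-z]{7,9}", username):
        return False
    vowel_ratio = sum(c in VOWELS for c in username) / len(username)
    return vowel_ratio < 0.25

candidates = ["fxgwxqdr", "ththygij", "mx_myxlplyx", "alexandra"]
hits = [u for u in candidates if looks_random(u)]
print(hits)  # ['fxgwxqdr', 'ththygij']
```

And per the comment above, this only works because the current crop is low effort; a generator trained on real username lists would sail right past it.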
On the other hand, any automated solution will be possible to work around. Such a system would be open source like the rest of Lemmy and you'd know exactly the criteria you need to live up to to avoid getting hit by the filter.
I guess it could end up being an arms race.
What if the tool were more of a toolbox, where each instance could configure it the way that they want (e.g. thresholds before something is flagged)? Similar to how automod works, where the options are well known but it's hard to tell what any particular space is running behind the scenes.
At the very least, tools like this can make it harder for silent vote manipulation even if it doesn't stop it entirely
Sigh...
I'll look into it. Thanks for pointing them out.
How did you discover this? I wonder if private voting will make it too difficult to discover
I'll try to summarize this as briefly as I can:
I was replying to a comment in a big news community about 5 months ago. It took me probably 2 minutes, at most, to compose my reply. By the time I submitted the comment (which triggered the vote counts to update in the app), the comment I was replying to had received ~17 downvotes. This wasn't a controversial comment or post, mind you.
17 votes in under 2 minutes on a comment is a *bit* unusual, so I pulled up the vote viewer to see who all had downvoted it so quickly. Most of them were these random 8 character usernames like are shown in the post.
From there, I went to the DB to look at the timestamps on those votes, and they were all rapid-fire, back to back. (e.g. someone put the comment AP ID into a script and sent their bot swarm after it)
So that's when I realized something fishy was happening and dug deeper. Looking at what was upvoted from those, however, revealed more than what they were downvoting. Have been keeping an eye out for those type of accounts since. They stopped registering for a while, but then they started coming up again within the last week or two.
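The "17 downvotes in under 2 minutes" tell lends itself to automation: pull the vote timestamps for a comment and flag any tight burst. A sketch, assuming you've already queried (voter, timestamp) rows from the DB; the function name and thresholds are made up for illustration:

```python
from datetime import datetime, timedelta

def vote_bursts(timestamps, window_s=60, threshold=10):
    """True if any sliding window of `window_s` seconds contains at least
    `threshold` votes on a single comment."""
    ts = sorted(timestamps)
    window = timedelta(seconds=window_s)
    for i in range(len(ts)):
        j = i
        while j + 1 < len(ts) and ts[j + 1] - ts[i] <= window:
            j += 1
        if j - i + 1 >= threshold:
            return True
    return False

base = datetime(2024, 9, 18, 12, 0, 0)
rapid = [base + timedelta(seconds=2 * k) for k in range(17)]    # 17 votes in ~32s
organic = [base + timedelta(minutes=10 * k) for k in range(17)]  # spread over hours

print(vote_bursts(rapid))    # True
print(vote_bursts(organic))  # False
```

Tuning matters: a genuinely viral post can also collect votes fast, so bursts are most telling on sleepy comments like the one described above.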
Depends how it's implemented. If the random usernames that are supplied from the private votes are random for each vote, that would make it nearly impossible to catch (and would also clutter the person table on instances with junk, one-off entries). If the private voting accounts are static and always show up with the same identifier, I don't think it would make it *much* more difficult than it is now with these random user accounts being used. The kicker would be that only the private version of the account would be actionable.

The only platform with private voting I know of right now is Piefed, and I'm not sure if the private voting usernames are random each time or static (I think they're static and just not associated with your main profile). All that said, I'm not super clear on how private voting is implemented.
As an end user, ie. not someone who either hosts an instance or has extra permissions, can we in anyway see who voted on a post or comment?
I'm asking because over the time I've been here, I've noticed that many, but not all, posts or comments attract a solitary down vote.
I see this type of thing all over the place. Sometimes it's two down votes, indicating that it happens more than once.
I note that human behaviour might explain this to some extent, but the voting happens almost immediately, in the face of either no response, or positive interactions.
Feels a lot like the Reddit down vote bots.
As a regular user, I don't think there's much you can do, unfortunately (though thank you for your willingness to help!). Sometimes you can look at a post/comment from Kbin to see the votes, but I think Mbin only shows the upvotes. Most former kbin instances, I believe, switched to mbin when development on kbin stalled.
The solitary downvotes are annoying for sure. "Some people, sigh" is just my response to that. I just ignore those.
Re: Downvote bots. I can't say they're necessarily bots, but my instance has scripts that flag accounts that exclusively give out downvotes and then bans them. That's about the best I can do, at present, to counter those for my users.
It is usually not a good idea to specify what your exact metrics are for a ban. A bad actor could see that and then get around it by randomly upvoting something every now and then.
True. But it uses a threshold ratio. They'd have to give out a proportional number of upvotes to "fool" it, and at that point, they're an average Lemmy user lol. That script isn't (currently) setup to detect targeted vote brigading, just ones that are only here to downvote stuff. I've got other scripts to detect that, but they just generate daily/weekly reports.
It takes time to detect them, but it does prevent most false positives that way (better to err on the side of caution and all that).
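The threshold-ratio approach described above might look something like this. All names and cutoffs are hypothetical; the point is the shape of the check, erring toward false negatives as the comment says:

```python
def flag_downvote_only(upvotes: int, downvotes: int,
                       min_votes: int = 50,
                       max_up_ratio: float = 0.05) -> bool:
    """Flag accounts that have cast enough votes to judge (min_votes)
    and whose upvote share sits below max_up_ratio. Thresholds here are
    illustrative; derive yours from your own instance's data."""
    total = upvotes + downvotes
    if total < min_votes:
        return False  # too little data yet; err on the side of caution
    return upvotes / total <= max_up_ratio

print(flag_downvote_only(0, 120))  # True:  downvotes only, plenty of data
print(flag_downvote_only(40, 80))  # False: plausibly just a grumpy human
print(flag_downvote_only(0, 10))   # False: not enough votes to judge
```

As noted above, an attacker can beat this by mixing in upvotes, at which point they look like an average user and this particular signal stops mattering; targeted brigading needs different detection.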
note to self, if my instance ever folds
join @dubvee.org
At the moment, admins can see the votes. Mods are going to in a future version (https://github.com/LemmyNet/lemmy/pull/4392 )
Good to know. I'm going to have to account for that in Tesseract.
But this is SOO tedious. The annoying bit is it could just be one person who set it up over a weekend, has a script that they plug into when wanting to be a troll, and now all admins/mods have to do more work.
You're fighting the good fight! So annoying that folks are doing it on freaking lemmy.
I wonder if there's a way for admins to troll back. Like instead of banning the accounts, send them into a captcha loop with unsolvable or progressively harder captchas (or ones designed to poison captcha solving bots' training).
https://neal.fun/password-game/
Not sure if shadowbanning can work here. Wasting each instance's limited pool of resources is not what we want to encourage.
Yeah not to mention it's not that hard to detect a shadowban if you're aware of the possibility. Lemmy doesn't even fuzz vote totals, so it would be trivial to verify whether or not votes are working.
Is there any existing opensource tool for manipulation detection for lemmy? If not we should create one to reduce the manual workload for instance admins
If there were, upbotters would use it to verify that new botting methods weren't detectable. There's a reason why reddit has so much obfuscation around voting and bans.
I mean if a new account or an account with no content on it starts downvoting a lot of things or upvoting a lot of things that's generally a red flag that it's a vote manipulation account. It's not always but it's usually pretty obvious when it actually is. A person who spends their entire time downvoting everything they see, or downvoting things randomly is likely one of those bots.
Could they come up with ways around it? Sure by participating and looking like real users with post and comment history. Though that requires effort and would slow them down majorly, so it's something that they're very unlikely to do.
Good point, but is it then possible to come up with detection algorithms that makes it hard for upbotters even if they know the algorithm? I think that would be more ideal than security through obfuscation but not sure how feasible that is
I don't know honestly. Really, with AI it would be pretty difficult to be foolproof. I'm thinking of the MIT card counting group and how they played as archetypal players to obscure their activities. You could easily make an account that upvoted content in a way that looked plausible. I'm sure there are many real humans that upvote stories positive to one political party and downvote a different political party. Edit: I mean fuck, if you wanted to, you could create an instance just to train your model. Edit 2: For that matter, you could create an instance to bypass any screening for botters...
What stops the botters from setting up their own instances to create unlimited users for manipulating votes?
I guess admins also have to be on top of detecting and defederating from such instances?
Nothing, really. Though bad instances like that would be quickly defederated from most. But yeah, admins would have to keep an eye on things to determine that and take action.
Project like https://gui.fediseer.com/
this has already happened multiple times. they get found out fairly quickly and defederated by pretty much everyone.
They usually get found out pretty easily and then defederated by everyone. There's a service called fediseer which allows instance admins to flag instances as harmful, which other admins can use to determine if they should block an instance.
In order for that to really work they would have to rotate between a lot of domain names either by changing their own instance's domain or using a proxy. Either way they'd run out of domains rather quickly.
It's way easier for them to just get accounts on the big servers and hide there as if they were normal lurking users.
Thank you for your service 🫡
Fedia hiding the activity is one of those things that I kinda dislike, as it was an easy way to detect certain trolls.
yeah, i'm split on public votes.
On one hand, yeah, there's a certain type of troll that would be easy to detect. It would also put more eyes on the problem I'm describing here.
On the other, you'd have people doing retaliatory downvotes for no reason other than revenge. That, or reporting everyone who downvoted them.
It depends on the person to use that "power" responsibly, and there are clearly people out there who would not wield it responsibly lol.
I'm fully against public downvotes because I already see people calling out other users by name in threads they're not even part of. There's no world where that behavior gets better when you give them more tools to witch hunt. Lemmy is as much an insular echo chamber as any social media, and there are plenty of users dedicated to keeping it that way.
I think retaliatory downvotes happen either way if you're in an argument. Same with report abuse, which, if it happens to a high degree, would be the moderator's responsibility to ban the perpetrator (reports here are not anonymous like they were on Reddit).
Also, if there's someone with an abusive mind, they can easily use another instance that shows Activity to identify downvoters. The vote is public either way for federation purposes, they're just hidden from certain instances - at least on the user level, but they're still there technically.
@ptz@dubvee.org I have cleaned these and some other bot accounts from my instance. I was ok to open registrations to this point because we were able to get reports for almost every activity and we could easily manage them. But unfortunately Lemmy does not have a regulatory mechanism for votes, so I'll keep it manual approval until then.
Also it looks like they're manually creating accounts since we had captcha + email approval in our instance from the beginning. So this means that even with manual approvals, a botnet can be created – just in a delayed manner.
Thanks for the follow up.
Yep, seems manual or at least only partially automated based on feedback from other admins.
Also yeah, unfortunately, Lemmy doesn't have the ability to report users to their home admins, just content they post. Not sure if that's a moderation feature that's in the pipeline or not (haven't checked for a bit).
I see an option to report a user via their profile page in Jerboa.
Huh. I'll have to check that out. Unless it's new in 0.19.4 or 5, I wasn't aware the API would let you report users (just their content).
It's possible it's non standard, I believe it is a Play Store policy to be able to report all forms of user generated content.
Ah, okay. I haven't really messed with Jerboa for a good while since it still seems to have issues with AOSP keyboard (last I checked in on that bug, anyway).
I was thinking of implementing a non-standard way of doing it in Tesseract (basically it would lookup the user's instance admins and send a DM). Perhaps that's what Jerboa is doing?
Shame, I was hoping there was an API feature for that now.
You should out the users and topics they are engaging with.
Ethically, I can't (and won't). I'm only comfortable and confident enough to share the list of sockpuppet accounts I've confirmed and provide the information necessary to detect them. I did list the topics I'm aware of (US news and politics), but I'm only able to see activity based on what my instance knows about. So they may be manipulating other communities, but if my instance doesn't subscribe to them (or they're by posters that have been banned), I have no way of seeing it.
That's actually why I posted this. My visibility is limited, so once I identified the pattern, I'm passing that along to other admins for awareness.
Don't respond if it is mostly "Blue MAGA" and "Genocide Joe"
This Blue MAGA shit is so fucking funny to me. It is the laziest no u. It came out of nowhere, they provide absolutely nothing to back it up. They just show up screaming Blue MAGA. I kind of miss the days when trolls actually tried. It isn't even fun anymore, and they just run away when you hit them with a factual rebuttal
I got banned from one of the politics communities for calling out someone using the "blue maga" phrase. I called them ambitious and then called them a weirdo and got my comment removed for "attack language"; when I questioned the mod, they banned me for a few days. I will avoid any communities that mod is a part of.
I've gotten a couple warnings on politics. I don't worry too much about it. Makes me have to be more clever, and not just directly attack people
Both news and politics subs are captured by brain dead DNC operatives.
Just block both, feed looks much better.
I’ve seen it often on pro-Israel accounts before. But they’re usually all registered a year ago and cycled through posting content.
Such as @idoubledo@lemmy.sdf.org.
I have a manual process for admitting people. Do I need to do anything if I know exactly who is on my instance, or to protect my instance from other bad-acting instances (beyond defederating, which I do when I notice a lot of spam)? Any queries you recommend?
With that in place, I wouldn't think so. I'm in the same boat with a small instance that has always used applications. The problematic accounts I've noticed are all using these random, 8-character names and seem to be setting up shop across open instances w/o applications. So chances are, if you're manually admitting people, you'd have noticed these already and likely not approved them.
Unfortunately, defederating only protects your instance's users from being impacted by the manipulations. Beyond that, it's less that these are bad instances and more that they're being taken advantage of (kind of like our persistent troll who instance-hops every few days).
For now, I've just banned the vote manipulation accounts and moved on (this PSA notwithstanding lol) I wouldn't consider these a "defederation worthy" offense. When I do defed, it's for bigger reasons or just temporary due to spam (sometimes admins can't deal with it right away but it's causing a huge problem now and I need to do something in the short term).
Queries, I do have some, but they're ugly AF. lol. I should prob look into starting a Matrix room or admin community where we can share and improve each others' utility scripts.
Thanks, that all makes sense. I'll keep an eye out
That's pretty much the official Lemmy space's Moderation tools room, right?
Possibly. I don't think I've been in or active in it for a while. Will check it out.
I don't think anyone has been active in it for a while 😆. Would be a good place for it though as there are still lots of eyes on the room even if no one is chatting.
I see most of them are on the same "lemy.lol" instance.
Users could also be doing this checking and reporting themselves, if votes were transparent, and they would be able to do it on a far wider scale. Oh those leopards, eating your faces, vote obfuscation proponents.
Another data point in favor of supporters of Dead Internet Theory .
Also, this is one more example of why it would be better if instances charged a little bit from everyone: spammers would rather run things from their own machines (or some illegal botnet) than pay for something with a credit card.
That may work, or you'd just get a bunch of chargebacks from stolen credit cards lol.
I do like the idea of some kind of verification besides from a questionnaire, but I'm not sure what would ever get traction.
Criminals use stolen credit cards for high value items that can be sold quickly. If criminals really wanted to do mass manipulation via AP servers, it will be easier/faster/cheaper for them to spin up their own servers than signing up for paid accounts.
The one counter-argument that I would accept though: what if bad actors running psyops *become* commercial providers to attract legit customers and mix it with their agents?
True.
I guess my main hangup with payment-based registration is trust. Personally, even though I am willing to pay for a Lemmy account (I guess I technically do since I run an instance), I would be somewhere between hesitant and completely unwilling to give payment info to a random instance that could be hosted by anyone.
If they use some kind of well-known, trusted donation/payment service, I guess that could alleviate that. Now that I think about it, it may also encourage people to use instances more local to them since they would probably want to recognize the donation platform the instance uses. (e.g. if an instance used a donation/payment service that's only well-known in Sweden, I would have absolutely no idea as an American if it was legit or not, would not risk it, and would choose a different instance).
I'm still not completely for the idea of requiring payment for sign up, but I definitely can see the benefits to it.
Pretty much any payment processor nowadays works in a way that the merchant has no direct access to payment data. And is there any place where Stripe is not widely known?
And if you are an admin of a paid-only instance (like mine) then obviously you want to use a trustworthy processor to avoid yet-another friction point. In my case, the only people that didn't want to use Stripe were the ones that wanted to pay me in cryptocurrency.
Stripe is pretty much global, outside of some weird prepaid/debit cards in various places which just don't work.
The bigger problem is that the number of chargebacks you need to get your merchant account killed with them is *very* small if you don't have substantial dollar/transaction volume which a Lemmy admin isn't horribly likely to have.
And of course, their chargeback fees in general are unpleasant though that's more of a universal problem than a Stripe problem.
That's one thing that nobody really talks about when discussing payment verification: the people who are willing to commit scams and fraud are also willing to use stolen credit or debit cards.
Yeah nah man. I’m poor as fuck. Like usually have cents left in my bank account poor, and don’t always have meals poor. I ain’t paying a penny, even though I love lemmy.
I’ll give my ID or passport before I pay money.
Every time someone uses phrases like "dead internet theory", I assume they're some crab living under a rock with limited real-world experience. And it fits every time.
Is it possible to get a report of which posts are being voted by them?
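Not natively, as far as I know, but if you have database access you can export vote rows and compare suspect accounts yourself. A minimal sketch, assuming you can pull `(voter, post_id, score)` tuples from the votes table (`post_like` in older Lemmy schemas; newer versions may name it differently, so treat the table name as an assumption):

```python
from collections import defaultdict
from itertools import combinations

def vote_overlap(votes):
    """votes: iterable of (voter, post_id, score) tuples, e.g. exported
    from Lemmy's post_like table. Returns the Jaccard similarity of each
    voter pair's (post, score) history; values near 1.0 mean two accounts
    voted almost identically, which is what the suspect pattern looks like."""
    by_voter = defaultdict(set)
    for voter, post_id, score in votes:
        by_voter[voter].add((post_id, score))
    overlap = {}
    for a, b in combinations(sorted(by_voter), 2):
        inter = len(by_voter[a] & by_voter[b])
        union = len(by_voter[a] | by_voter[b])
        overlap[(a, b)] = inter / union
    return overlap
```

Run it over the random-8-character accounts plus a few known-good accounts as a baseline; legit users rarely share near-identical vote histories.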
In case people are wondering these bots are advocating for Kamala Harris.
Dessalines is the developer of Lemmy and occasionally notifies users that their posts are being botted.
Dessalines notified me on two of my own posts that they were heavily downvoted by low/zero-content accounts in the past as well.
Just beware the bias here. That they only point this out when it targets people working against their agenda doesn't mean the same thing isn't going on for the other side... just that they aren't going to point it out, as it doesn't help their agenda.
If Left wing content was upbotted I'm quite sure the Lemmy.World admins would notice and point it out very quickly.
That's funny.
Trump lacks the numbers, the budget, and the leadership. Much has changed in 8 years.
It isn’t trump’s people pushing the buttons. The people running around before Biden dropped out yelling “Genocide Joe” almost definitely weren’t under trump’s control, and were possibly paid foreign actors. The same was true 8 years ago.
The USA kills lots of people: Native Americans, African Americans, Arabs, Asians. We're still doing it, now even to whites in Ukraine. It's all to feed the military industrial complex, just as Eisenhower warned. War is why capital reserves for banks have been raised and why the Fed speaks of lowering the borrow rate. Admitting it is the difference between a patriot and a nationalist.
"..even white..." GASPP
Tell me you're racist without telling me you're racist.
Democrat posting is ruining this site
Their entire agenda hangs on people not jumping ship because everybody just loves Democrats so much and the alternatives are not popular enough.
Nobody is even arguing that they like Democrats anymore, it's just yapping about how Biden (and now Kamala) is the only viable option to win. If it turns out alternatives are actually popular enough that enough people are willing to jump ship the entire narrative sinks.
Consider the alternative of putting the Green platform on every ballot in 2028, overcoming the first and largest obstacle maintaining the duopoly. It takes only 5% of the GE popular vote. Then, watch as Democrats panic to maintain the "big tent" for four years.
I'm a triple minority. I've suffered assholes my whole life. Things got worse under Trump. But, that damage is already done. In this facet nothing changes for me if it's Trump once again. I'm not afraid. We must think in terms much greater than four years.
The duopoly is maintained by First Past the Post. If we want meaningful change, we must change the voting system.
https://ncase.me/ballot/
http://zesty.ca/voting/sim/
https://cdsmithus.medium.com/simulating-elections-with-spatial-voter-models-1ff50892390
You're flat-out wrong, likely rooted in ignorance of the system. Without a choice on the ballot, it can't be voted for. The obstacles to ballot access are far greater than winning an ideological majority.
The Democrats don't have Ranked Choice Voting on their agenda; this is a red herring.
The Greens have RCV though! 2 in 1!
I have been calling this shit since the DNC did that switcheroo trick, for which I was needlessly downvoted. I just blocked news and politics subs.
Good to see some hard evidence that we've got DNC commissars around.
People can support Kamala all they want, but when somebody is spending money to set narratives, even supporters should raise an eyebrow. What is this really about?
Lemmy should have the option to defederate from instances depending on automated criteria. Sign ups without admin checks are a great attribute to use for defederation, because it leads to such abuse. I've finally blocked most communities and instances that have news about US politics and have a clean feed, but for newcomers, that shit is everywhere.
Anti Commercial-AI license
It's not a native feature, but some instances have a script or plugin (not super familiar with it beyond a general awareness of its existence) that can tie their federation allow/block lists with Fediseer. So, like, if an instance gets censured by a bunch of other instances you're on good terms with, it can automatically pick that up and add it to your block list.
I don't hate the idea of that, and I have seen it protect a few instances from several spam waves, but I haven't implemented it myself.
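The core decision step of that Fediseer-based approach is simple enough to sketch. Assuming you've fetched censure data (which instances have censured which domains — the exact response shape from Fediseer's API is an assumption here), the logic boils down to "block anything censured by enough instances I trust":

```python
def auto_blocklist(censures, trusted, threshold=2):
    """censures: dict mapping a censured domain -> set of instance domains
    that censured it (roughly what Fediseer's censure endpoints report;
    field names and shape are assumptions, not the real API schema).
    trusted: instances whose judgement you accept.
    Returns domains censured by at least `threshold` trusted instances,
    sorted, ready to merge into your instance's federation blocklist."""
    blocked = set()
    for domain, censured_by in censures.items():
        if len(censured_by & set(trusted)) >= threshold:
            blocked.add(domain)
    return sorted(blocked)
```

The threshold matters: requiring two or more trusted censures keeps one instance's beef from propagating into your blocklist automatically.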
I think Beehaw defederated from lemmy.world and sh.itjust.works because of abuse stemming from open signups.
Lemmy should do something like make captcha and email verification the default in the next version, and reject federation from anyone with a lower version. If we accept federation from any instance where this was never turned on, banning accounts one by one is worse than Sisyphean. They'll just keep finding more vulnerable instances that are already trusted and abuse them to spam the rest of the fediverse.
If admins want to manually turn it off, then they should be prepared to manage that.
21% of instances are still running 0.19.3 as we speak: https://fedidb.org/software/lemmy/versions
If instances are unmaintained, losing them is probably a good thing.
Not sure you want to lose Lemmy.world, sh.itjust.works, and programming.dev; that would be around 40% of the active userbase.
I think it's unreasonable to call every instance that doesn't update immediately with every release "unmaintained". If anything, a lot of these bleeding-edge instances that update to the latest version immediately, without waiting at all, are kind of reckless.
After all, everyone remembers the federation bugs that were present in one of the releases and ended up being very bad for a lot of instances: content failed to federate between instances. Not good.
So I'm really not into the idea of trying to force or incentivize updates to unstable and untested versions if admins are unwilling to do it. And I'm especially not into the idea of criticizing admins who prefer to hold off on updating until they are sure the versions are stable.
It's painfully obvious lemmy is overrun with astroturf. Kamala spam has been oppressive and it's just cringe most of the time. I refuse to believe that most of the real users here are that cringe. Also, I support Kamala.
I'm really not sure. 47k monthly active users, between 30% and 50% of them not American, and those who are are already going to vote Democrat. Is it really worth the hassle?
They're spamming that site into every crevice of the internet they can.
Because we lean Democrat, our user base accepts it. I get that if they shill for your "team" it doesn't feel as offensive. But it is a malicious operation.
These clowns running the politics and news subs are bad-faith actors, and they showed their hand with the Kamala shill ops.
Just an opinion, but I have been saying this for months and now it feels particularly good to be validated.
I wouldn’t assume which way the astroturf is going. Would need to look at the accounts first.
It's funny because there's a user assuming the astroturf is going the other direction and they're getting mostly upvoted. Too many people are thinking with their feelings instead of their brain.
The blue wave doesn't care about wisdom or agency any more than MAGA. The masses mistake revolutionary and Russian agent in false dichotomy. And, the .world mods are more than complicit.
The majority here will hate you for truth. There are better venues for it.