This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Progress update: - At this point in time we have some consensus to have links to published Wikinews articles from the relevant dates at the bottom of each of the seven Day-templates at Portal:Current events. (Notice how currently there are links to published Wikinews articles at the bottom.) This is not necessarily related to the Wikinews Main Page lead templates, just those Wikinews articles that are published and are of those particular dates. Could you help with arranging this? Thank you so much, Cirt (talk) 21:27, 1 September 2008 (UTC)[reply]
Sorry. I seem to have missed your second comment among others. Do I understand correctly that you want each page such as Portal:Current events/2008 August 30 to contain an additional list of auto-generated links to articles published on that specific day? Is it not doable under the current framework? (I don't exactly know how <DynamicPageList> works and what it allows.) Also, any specific category from which these articles should be picked? And how many of them? (If my questions do not seem to make things clear, please provide a quick example of how the page should look.) Миша13 20:46, 3 September 2008 (UTC)[reply]
Yes, that is exactly what I mean. I don't know if categories could be set up at Wikinews with <DynamicPageList> for something like Today, Today-1, and so on through Today-6, but then again I am not the most expert with <DynamicPageList>. In any event, for something this complex it might be good to have a bot do it directly - but are you familiar with <DynamicPageList>? If not I will try to ask around about that part of it at Wikinews and might be able to arrange this on my own - oh, but wait, the day number would change each day, so what was Today-6 would the next day become Today-5, so yeah, this might be more complex than using <DynamicPageList>. Cirt (talk) 22:32, 3 September 2008 (UTC)[reply]
I'm not really familiar with the details of DPL's workings, but from the description it seems to be very flexible. I'll poke around a bit, see in detail how Wikinews utilizes it and think of how to force it to work in our favor. Also, there still remains the question of which articles to pull - all articles published on the specific day, or some specific category? Finally, how many of these? All? The 10/15/20 latest?
As a post scriptum, I am going on a long-overdue vacation starting this Saturday and might have limited or no Internet access for a week or so - thus, progress may halt for a moment on my part. Миша13 17:28, 4 September 2008 (UTC)[reply]
I would say start with a wide margin, so use all published on a specific day. No worries if you can't get to it for a bit, like I said I will try to see about figuring out more about the DPL stuff on my own, but because of the complexities described above it would be best to have your help. Cirt (talk) 19:13, 4 September 2008 (UTC)[reply]
Okay I think I have almost got this figured out on my own. Please see the following subpages:
I have fixed the source page parameters (you forgot the Wikinews: namespace prefix) and allowed the bot to also edit pages that contain "/Wikinews/" in title - it seems to be working well now. Миша13 22:32, 4 September 2008 (UTC)[reply]
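For anyone curious how such a daily update can be scripted, below is a minimal sketch in the spirit of what is described above; the category scheme, the subpage title and the function names are illustrative assumptions (the bot at the time ran on the older pywikipedia framework), not its actual source.

# Rough sketch: collect Wikinews articles published on a given day and
# write them as links to a Portal:Current events "/Wikinews" subpage.
# Category and page names below are assumptions for illustration only.
import datetime
import pywikibot

def wikinews_links_for(day):
    wikinews = pywikibot.Site('en', 'wikinews')
    cat_name = 'Category:%s %d, %d' % (day.strftime('%B'), day.day, day.year)
    category = pywikibot.Category(wikinews, cat_name)
    return ['* [[n:%s]]' % page.title() for page in category.articles()]

def update_day_page(day):
    enwiki = pywikibot.Site('en', 'wikipedia')
    title = 'Portal:Current events/%d %s %d/Wikinews' % (day.year, day.strftime('%B'), day.day)
    page = pywikibot.Page(enwiki, title)
    page.text = '\n'.join(wikinews_links_for(day))
    page.save(summary='Updating links to published Wikinews articles (bot)')

if __name__ == '__main__':
    # Refresh yesterday's subpage; a real job would loop over Today-6 through Today.
    update_day_page(datetime.date.today() - datetime.timedelta(days=1))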
Well, as you know your bot delivers the WP:PW newsletter (and everyone thanks you), but when I go to my talk page I always get 2 of the same newsletter, while other user talk pages that get the newsletter have only one. Is there any way to fix this? RkOrToN 00:36, 1 September 2008 (UTC)[reply]
Recently, we've cut down our members list, removing all inactive members. The problem is that your bot merges the members list with the category of members. Can you think of a solution for this? -- iMatthew T.C. 13:29, 1 September 2008 (UTC)[reply]
I could stop merging them. :) It would be trickier for the bot to track changes to the category and only add new members to the list, but I could eventually do it, if you wish. Миша13 17:18, 4 September 2008 (UTC)[reply]
Hello Misza13/Archives/2008. You have been invited to join WikiProject Software, a WikiProject dedicated to improving the Software-related articles on Wikipedia. You received this invitation due to your interest in, or edits relating to or within the scope of, the project. If you would like to join or just help out a bit, please visit the project page, and add your name to the list of project members. You may also wish to add {{User WikiProject Software}} to your userpage and {{Wikipedia:WikiProject Software/Announcement-u}} to the top of your talk page with the heading ==WikiProject Software Announcement==. If you know someone who might be interested, please pass this message on to others by pasting this code into their talk page {{Software invite|~~~~}} with the following heading == WikiProject Software ==.
It is a criminal war of aggression against innocent civilians; by parroting what the American media says about this "war", Wikipedia is effectively furthering the aims of the corporations behind the war, like Halliburton, Blackwater, etc.
If you want to do the right thing, you should report the truth behind this issue, and adopt a firm stance against this horrible atrocity. Wikipedia's information on the Holocaust is totally truthful and accurate; should we not do the same for Arabs and Afghans because they are not white? —Preceding unsigned comment added by Geriatric Erotomania (talk • contribs) 21:52, 5 September 2008 (UTC)[reply]
Archiving troubles
Hi. I realize you're on vacation, so don't worry about this, it isn't urgent at all. However, if you could do me a favor when you return: I tried to set up my userpage to archive with MiszaII, but so far even though I've double-checked the template, it isn't working. I have the following code on my userpage:
{{User:MiszaBot/config
|algo = old(14d)
|archive = User talk:Borg_Sphere/Archive %(counter)d
|counter = 1
|maxarchivesize = 150K
}}
Can you please tell me what I'm doing wrong? Thanks, and sorry if I'm doing something stupid that I just missed reading the FAQs. Joe (Talk) 20:19, 10 September 2008 (UTC)[reply]
Signpost updated for August 25 and September 8, 2008.
I am currently using your status script (I love it!), but I was wondering if there could be a way to set a time period during which the status would always be set to "out" automatically. This would be very useful for school or work, I think!
Not in a straightforward way, because any change requires that your browser actively edits the status page, so it has to be on. One possible alternative would be to use conditional m:ParserFunctions, but I'd have to think more about it first. Миша13 14:42, 14 September 2008 (UTC)[reply]
Archiving troubles, part 2
Please, program your Bot to archive things when it's really needed.
I believe that's exactly what the bot does, where you can define "really needed" by specifying the period after which threads are considered "stale". Миша13 14:43, 14 September 2008 (UTC)[reply]
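For readers curious what that staleness test amounts to in practice, here is a minimal illustrative sketch; the regular expression, the 14-day default and the function names are assumptions for illustration, not the bot's actual source. A thread is archived only if the newest signature timestamp it contains is older than the configured age.

# Minimal sketch of the "archive when stale" rule described above.
import re
from datetime import datetime, timedelta

SIG_RE = re.compile(r'(\d{2}):(\d{2}), (\d{1,2}) (\w+) (\d{4}) \(UTC\)')
MONTHS = ['January', 'February', 'March', 'April', 'May', 'June', 'July',
          'August', 'September', 'October', 'November', 'December']

def newest_timestamp(thread_text):
    # Collect every signature timestamp in the thread and keep the newest.
    stamps = [datetime(int(y), MONTHS.index(mon) + 1, int(d), int(hh), int(mm))
              for hh, mm, d, mon, y in SIG_RE.findall(thread_text)]
    return max(stamps) if stamps else None

def is_stale(thread_text, max_age=timedelta(days=14)):
    # A thread with no parsable timestamps is left alone.
    newest = newest_timestamp(thread_text)
    return newest is not None and datetime.utcnow() - newest > max_age

Anything that mangles the timestamp (an unusually ornate signature, say) can make a thread invisible to a parser of this kind, which is one possible reason a thread occasionally refuses to archive.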
Talk:The Lion King archival
I undid an automated archival of several threads that MiszaBot I had moved from Talk:The Lion King to Talk:The Lion King/Archive 1, which holds threads from November 2006 and older. There is already a Talk:The Lion King/Archive 2 which has threads going nearly to the end of 2007, so any recent threads being archived should go there (or perhaps an Archive 3 should be created and the threads moved there); it should not in any case be archiving any discussion threads to Archive 1 at this point. --Mwalimu59 (talk) 07:54, 14 September 2008 (UTC)[reply]
Your Bot Brought My Bridge to Nowhere (Literally)!
LOL. If I'm not horribly confused, your bot, in archiving the Palin talk page, took my section that I had just edited -- "Bridge to Nowhere Redux" -- and destroyed it in a puff of bits, brought my bridge to nowhere, so to speak :-) Not sure why it happened, but I can't find it on the current page or either of the two older archives. So I'll restore it. GreekParadise (talk) 03:16, 15 September 2008 (UTC)[reply]
Gra wp reverts
Just dropping by and making sure you are here, and not running an adminbot, since you keep beating me to reverting grawp. :P How are you doing that so fast? Prodego talk 20:34, 16 September 2008 (UTC)[reply]
Based on your lack of response I believe, though I hate to assume, that you are running an unauthorized bot. I just wanted to point you to WP:BOT and note that you should request approval before running any unattended bots. I also ask that you discontinue using any bots until they are approved. Thanks, Prodego talk 21:00, 16 September 2008 (UTC)[reply]
Administrators are given access to tools so that they can better protect the project from harm. What harm is being done? What harm is your threat preventing? If you're really that upset Misza is beating you to reverts you yourself agree need to be made, maybe you should take up a hobby -- I hear knitting can be very soothing. For the record, I will insist that any sanction against Misza regarding these allegations is first discussed and supported at WP:AN/I, given the numerous prior discussions which have opposed any such thing. – Luna Santin (talk) 21:29, 16 September 2008 (UTC)[reply]
Ah, no I think you misunderstand. I noticed because of Misza's blocking speed. As you know, grawp is a high speed vandal, and Misza seemed to be using a script (which is initiated and reviewed by a human, unlike a bot) that was faster than anything I had seen. So I was interested in anything that could help deal with this vandalism faster. Unfortunately I found that this was not in fact a tool that is permitted: unauthorized bots are forbidden by the bot policy, and should be blocked according to the blocking policy. If Misza does not stop using the bot, as required by policy, I will have to block him, as required by policy. But I doubt that will be necessary; surely Misza simply didn't understand. Established consensus and policy clearly do not allow unapproved bots. You are free to bring this up on WP:ANI yourself, but I do not believe it is necessary in such a clear violation of policy, or in a situation which I am sure can easily be resolved simply by Misza getting approval for his bot. Happy editing! Prodego talk 21:34, 16 September 2008 (UTC)[reply]
On the contrary, I think you misunderstand. This isn't a game. Can you demonstrate any tangible way in which Misza has damaged this project, or in which Misza's actions currently pose an imminent danger to the encyclopedia? It seems to me you haven't even taken a moment to research past discussions regarding this very topic, with this very user, nor to resolve this without immediately reaching for your block button. For shame. – Luna Santin (talk) 21:52, 16 September 2008 (UTC)[reply]
I asked you some questions. Given your abject refusal to answer them, I can only assume you agree Misza's actions pose no danger to the project. – Luna Santin (talk) 21:55, 16 September 2008 (UTC)[reply]
Does it matter? Atlanta, Georgia has a law against pants being too low. It doesn't hurt anyone or anything, but you still have to follow it. --Skunkboy74 (talk) 22:00, 16 September 2008 (UTC)[reply]
Similarly, in Gainesville, Georgia, a city ordinance prohibits eating chicken with a fork. It's a law, but you still don't have to follow it. --slakr\ talk / 22:08, 16 September 2008 (UTC)[reply]
I should very much hope it matters. Any block which cannot be explained as somehow preventing damage to the project is by definition a bad block -- this is, after all, one of the philosophical cornerstones of our blocking policy. Note that your argument, while a nice soundbite about the nature of authoritah, provides no reasonable evidence that any such damage has occurred. – Luna Santin (talk) 22:08, 16 September 2008 (UTC)[reply]
In the Real World, rule of law is generally crucial because there are many actions which cannot be undone; on a wiki, where the vast majority of actions can be undone with both ease and speed, that's less of a concern. Rules are not an end unto themselves, on this project -- that is another philosophical cornerstone of this wiki, like it or not. You ask how I can defend this? Simple enough: no one has yet demonstrated any harm being done, here, but many users, even including the administrator who's prematurely threatened to block Misza, apparently believe his actions are helpful. The question remains: what damage is being done to the project? – Luna Santin (talk) 22:21, 16 September 2008 (UTC)[reply]
Agreed. More importantly, given the scientifically-measurable amount of damage that one could statistically gather, does that damage outweigh the similarly measurable benefits? Clearly the number of automated fuck-ups is statistically insignificant — if it even exists in recent history — in comparison to the automated one-ups that said automation has achieved. Simply put, if this bot is only causing good, one can only remove that good by blocking it. In effect, one is indirectly creating an imbalance; for, the amount of bad (vandalism) would inherently increase by negating the measurable good (this bot) in favor of some hypothetical-but-immeasurable good (strict adherence to policy). --slakr\ talk / 22:27, 16 September 2008 (UTC)[reply]
I agree policy is fluid, but it is all we have got. Please, I urge you, if there is consensus to allow bots that do not require approval through the BAG process, the policy should be changed. All one must do to change policy is to obtain consensus, and change the policy. Right now the blocking policy tells me explicitly that I should block "bots operating without approval", and indeed this happens all the time to non-admin editors. Admins and unapproved admin bots are held to the same standard as a non-admin or his/her unapproved bot. But again, this is all speculative, I am sure that Misza will simply request approval through the process that current consensus has created, and there will be no additional problems. Prodego talk 23:35, 16 September 2008 (UTC)[reply]
I will not request any approval simply because a) I don't play process for the sake of it or to the process wonks' (such as yourself) satisfaction b) the bot already is approved, authorized or whatever you call it and operates within policy. If that policy is IAR (which is the default if nothing else can be applied), it doesn't matter - the admin bot section of WP:BOT does not have community consensus and until a few hours ago was only tagged as proposed (still, it's disputed and doesn't apply retroactively anyway, so this is moot).
Furthermore, I am surprised these questions come from you, who have a longer tenure as an administrator - the bot has been operating for nearly two years now and everyone and their grandma is aware of its existence.
Finally, if you still perceive that the blocking policy "tells" you something you cannot resist despite no evidence of damage being done, I must suggest (per Luna above) switching to knitting every now and then.
Please allow users to archive to subpages of their userpage
I think that your bot should allow a user to archive his/her talk page to subpages of the userpage, not only to subpages of their user talk page. Doing this would seem to be in line with the way several users handle their archives, and a user should be allowed to do it without getting the key from you. עוד מישהו Od Mishehu 12:24, 17 September 2008 (UTC)[reply]
Hi, I've just reverted Penn Mutual back to a non-copyvio version, but the above image which is used in this version had been deleted as orphaned (correctly, as the copyvio version has been there for nearly two months). Is there any chance you can restore it? Regards, ascidian | talk-to-me 11:38, 20 September 2008 (UTC)[reply]
Hi, there is a particular thread that your wondrous bot is leaving open, and there's been a little speculation as to why, with one theory being that some signatures there are overly ornate. As I probably have the most frequent and most pretentious signature in that thread, I'd be interested to know why it has not been archived; hopefully it is something mundane like the number of blank lines or someone adding a phrase to prevent the thread being archived prematurely - but I'd appreciate you having a look if you have a spare minute. Ta muchly. ϢereSpielChequers 07:09, 19 September 2008 (UTC)[reply]
I don't want to be this confrontational. I really would have preferred a BRFA in the open like every other bot.
Your username-blocking bot seems to take input on who to block from IRC. Who is on the channel that it takes input from? What kind of information do they see about the user, and how do they determine whether to block? That IRC channel worries me because it sounds like an alternate version of UAA, except at least on UAA we can check the users' first few contributions to make sure we're not getting a false positive. Then again, perhaps I'm misunderstanding what the IRC connection is for.
I haven't seen anything in your image-deleting bot that makes sure that the tag is correct. (In at least one case -- "no rationale" -- this is hardly unexpected. Since by consensus we do not require fair use images to use particular templates besides the ones in the copyright tag dropdown box, determining whether there is an adequate rationale requires natural-language understanding. I'm opposed to having a bot handle that task at all.)
If an image is tagged as orphaned, does the bot check to make sure that it is orphaned? If it's tagged as "no copyright information", does the bot make sure of that? You also mentioned the problem of images that are inadvertently orphaned by vandalism, saying other admins would delete them too; but other admins could use their common sense to check the page the image is supposed to be on and see if it's vandalized. Could you at least make your bot do something similar, such as following a rationale link if it can find one, and seeing if that article has recently had the image removed from it?
First of all, there are two IRC connections. One to the recent changes feed (no user can input anything to the bot there) and the other to freenode. The freenode part has two purposes:
Reporting - the bot alerts everyone in the channel(s) to certain suspicious behaviour as well as announces blocks it places (so any admin could rectify it immediately should they notice it's a mistake). The bot does provide a link to a user's contributions in its report. In addition, our main CVN reporting bot is chatting in the background, so everyone has the big picture.
Remote control - the bot can be configured at run time by issuing certain commands to it, including the modification of regex blacklists (the ones that trigger automatic blocking) as well as manually issuing blocks. This is all based on user access level (like the old IRC Services) but right now it's pretty much theory because I'm the only one with access level that grants anything beyond the status command.
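To make the regex-blacklist mechanism above concrete, here is a rough, hypothetical sketch of that kind of triggering logic; the example patterns, the injected block and announce callables, and all other names are stand-ins for illustration, not the bot's real code.

# Hypothetical sketch of regex-triggered username blocking with IRC reporting.
import re

# Example patterns only; the real blacklist is maintained at run time via IRC.
BLACKLIST = [re.compile(p, re.IGNORECASE) for p in (r'grawp', r'example pattern')]

def check_new_user(username, block_func, irc_announce):
    # Block a newly created account if its name matches the blacklist, and
    # announce the action so channel admins can immediately undo a mistake.
    for pattern in BLACKLIST:
        if pattern.search(username):
            block_func(username, reason='Blacklisted username')
            irc_announce('Blocked [[User:%s]] -- contribs: '
                         '[[Special:Contributions/%s]]' % (username, username))
            return True
    return False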
For the image bot, the "tag is correct" assumption is drawn from the presence of the image in the category (it draws the images from that given category). In theory, a devious individual could just place an image in that category, but then:
If the image had no tag and so qualified for I4 - if it gets deleted before 7 days then it's not a big deal (it was orphaned anyway).
If the image had a free-use tag on it (PD, CC, GFDL, etc.), it is skipped (though in many cases these tags are placed illegitimately).
The bot doesn't check for a written rationale - I don't think it's doable by a bot (although I have noticed there's a certain threshold of image page length (in source code characters) below which all images could safely be deleted as not having a rationale, but that's another story). But the logic goes deeper - again an assumption is drawn that, if the image is in a "missing rationale" category, then it is fair use in the first place, as such qualifying under I5 if orphaned (free-use tag exclusion applies). Though maybe the summary used in deletion is misleading - I was using an I5 reason at first until it was pointed out to me that in some cases the images were orphaned for a shorter period than 7 days.
As for orphan status, of course it is checked, in all cases (the bot computes LinkCount = len([x for x in ImagePage.usingPages() if x.namespace()==0]) and skips the image if LinkCount > 0) - no image used in an article is ever deleted. That's one of the most important assumptions I have in mind when simplifying an algorithm - if the image was orphaned and fair use (these two conditions are assured strongly) then it's no big deal if, from time to time, it gets deleted earlier than supposed or with a wrong edit summary - these images are not a great loss to a supposedly-free encyclopedia and besides, I always restore the images if asked to.
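Expanded into a standalone check, the orphan test quoted above looks roughly like this (the pywikipedia-style usingPages() call is the one cited in the reply; the wrapper function is an assumption for illustration):

# Sketch of the orphan test: count only article-namespace (ns 0) pages
# that use the image; anything outside mainspace does not "protect" it.
def is_orphaned(image_page):
    link_count = len([p for p in image_page.usingPages() if p.namespace() == 0])
    return link_count == 0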
Finally the "admin would check" argument. Oh, would he? Ask on AN, you might get some honest answers. You can't possibly expect those who used to delete hundreds of image each day checked anything other than the image description page and (in rare cases) the image history. Non-free images are butchered mercilessly on the premise that 1) they are not worth that much trouble and 2) it's easy to fix mistakes (image undeletion was a relatively new feature).
And so we come to the hardest part (for me at least). One could make a (valid) argument that a bot, being many times faster than a human and tireless, should always take the extra effort and check:
When exactly was the deletion tag (no source/orphaned/etc.) added, and by whom?
If possible to check (a backlink in image description page), who removed the image from the article, and when?
And indeed, I wrapped my brain around this for a while in the early months when the first few "article vandalized" complaints came in (there don't seem to be any devious tag frauds). But then I realized it's not as easy as it sounds (this would require parsing the history revision by revision, a non-trivial task, though certainly doable, plus the image might've been removed from several articles, etc.). In short, I realized that 1) the logic becomes too complicated, and complicated systems fail more often and 2) it's more trouble than it's worth (what did I say about orphaned fair use images and a free encyclopedia?), especially since the complaints were rare too, so I decided I can handle them without problems and will settle for simplified safeguards such as the free-use tag exclusion and a list of "trusted" users and bots who can tag images - the bot now picks a contiguous block of edits made by "trusted" users from the top of the image's history (this is supposedly the tagger and any bots that subsequently edit the page) and checks the date of the oldest of those edits. If that is more than 7 days ago (and again, the image is orphaned and not tagged with a free license, in addition to belonging to a deletion category) then *poof*, with a summary related to the deletion category the image comes from. Миша13 09:42, 19 September 2008 (UTC)[reply]
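Putting those simplified safeguards together, the overall decision roughly resembles the sketch below; the trusted-user list, the predicates on the image object and the helper names are all placeholders for illustration (is_orphaned is the helper sketched earlier), not the bot's actual implementation.

# Hypothetical outline of the deletion gate: the image must come from a
# deletion category, be orphaned, carry no free-licence tag, and have been
# tagged by a "trusted" user or bot more than 7 days ago.
from datetime import datetime, timedelta

TRUSTED = {'ExampleTagger', 'ExampleTaggingBot'}   # placeholder names

def tagging_time(history):
    # history: newest-first list of (username, timestamp) revisions.
    # Walk the contiguous block of trusted edits from the top and return the
    # timestamp of the oldest edit in that block (the presumed tagging edit).
    oldest = None
    for user, when in history:
        if user not in TRUSTED:
            break
        oldest = when
    return oldest

def should_delete(image, history, now=None):
    now = now or datetime.utcnow()
    if not image.in_deletion_category():      # placeholder predicate
        return False
    if not is_orphaned(image):                # see the earlier sketch
        return False
    if image.has_free_licence_tag():          # placeholder predicate
        return False
    tagged = tagging_time(history)
    return tagged is not None and now - tagged > timedelta(days=7)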
Thank you for the explanation. In your image bot, you've thought of and addressed many of the cases I was concerned about, and I now have more confidence in your image deletion actions. I was indeed misunderstanding how the block bot used IRC, so that's good as well.
I wish you wouldn't dismiss me and so many others as "process wonks". The last time I saw someone coming up with new bot tasks without approval, it was Betacommand and his bad-idea bot brigade. Knowing how your bots work has convinced me that you are far more responsible.
See that? I'm an editor who is very concerned about automatic blocks and image deletionism, but now that I've read your description I trust that you're generally doing it right. I don't think it would be that painful for you to say basically the same thing on BRFA. rspeer / ɹəədsɹ 07:31, 21 September 2008 (UTC)[reply]
I would second this. As a user who also has concerns about certain bots operated by certain other editors, I'm happy to see responsible bot owners who think through their actions and take criticism/suggestions on board. Orderinchaos 10:20, 23 September 2008 (UTC)[reply]
You still around?
I haven't had any time to edit lately, I see you have.
Oh, I guess I didn't request it because I'm from the old ages where we didn't tag all bots like we do today, especially the slow-editing ones (unlike ClueBot) that make not-so-minor edits, so that people (who usually have bots filtered out) can benefit from the knowledge that a certain page has been edited - like, news updated. (Also, it seems to me that filtering bot edits still hides a page completely from the watchlist instead of showing the last non-bot edit.) Lastly, you can just unwatch these pages since they're only edited by the bot. Cheers, Миша13 20:09, 21 September 2008 (UTC)[reply]
Mutual desysop idea you floated
So, Special:Desysop, you tag me, or I tag you, we both instantly lose tools until a 'crat gives them back? Then the AC and community decide who was right, if it was warranted, and any abusive use of this is basically the termination of your administrative career for life (to be honest, that's what it would be). I don't think it would be an easy come, easy go, thing either. rootology (C)(T) 17:17, 21 September 2008 (UTC)[reply]
That's basically it, although I'm not exactly clueful as to how this should be mopped up. Back to RfA? A dedicated RfA-like but simplified forum? The AC exclusively to decide? Or perhaps the "tagger" decides on return options? Also, this was originally thought of as a faster way to desysop a compromised account (stewards happened to be hard to get a hold of); now we're talking with an added spice of possibly using it as a tool in an administrative dispute (such as a wheel war). This is an easy go, because it could potentially come from any hot-tempered admin who disagrees with a decision of yours, but is far from an easy come, which is why I have my mind wrapped around the concept of a simpler "comeback" process. Миша13 18:19, 21 September 2008 (UTC)[reply]
I would guess any way BUT the AC deciding would be impossible. And not an RFA... presumably if someone pulled the trigger on this, it's got to be for a great "Wikipedia in time of need" kind of thing. If I do it because you annoyed me that desysop of you would probably be my last admin action ever, and your last one till someone turns it back on again for you shortly. If the emergency desysopping "sticks" per the AC, I'd guess that's your forced retirement under a cloud. You can always try to RFA back later, then... rootology (C)(T) 04:23, 22 September 2008 (UTC)[reply]
Neat. Why couldn't it, however, apply to 'crats, like you mentioned there? Just hypothetically, I mean. A 'crat taking another 'crat's bit would in turn lose their own 'crat bit, right? rootology (C)(T) 13:45, 23 September 2008 (UTC)[reply]
Yeah, a 'crat would be a problem given that he becomes sort of a self-reloading gun with that extension, but on the other hand, you'd think we apply pretty high standards to those people, no? Still, how about making it more flexible - you choose which bits you want to "annihilate" against the target? This way, for example, a 'crat can hit another one and only reduce them to sysop status. But on the other hand, on a small wiki with only two 'crats, one could first annihilate their sysop flags, resysop oneself and then annihilate the 'crat flags and remain the only sysop. Has its quirks one way or the other. Btw, Special:AnnihilateRights sounds sooo much better as a name for this. :) Миша13 19:46, 23 September 2008 (UTC)[reply]
A bigger concern would be that if a crat's account were compromised by a technically capable individual, they could script themselves up an army of kamikaze admin socks to knock all other admins (and potentially 'crats) out of the system. Hence couping the system until a steward could be summoned. Burzmali (talk) 15:57, 24 September 2008 (UTC)[reply]
If you were responding to me (the line directly above your comment....), I was saying that I've made it so it can be configured so that's not an issue... SQL Query me! 18:19, 24 September 2008 (UTC)[reply]
Additionally, a compromised / rogue / etc. crat could already do FAR worse things. There's a reason we restrict some REALLY dangerous functions to them. SQL Query me! 18:44, 24 September 2008 (UTC)[reply]
I'm wondering, what's worse than having free rein over a wiki? Using scripts, and a heaping helping of proxies, a 'crat account with this feature could control Wikipedia for quite a while. As it stands, a compromised 'crat account could be stopped by modified anti-grawp measures, but not if they can remove the opposition. Right now, you would have to compromise a Steward account to do that kind of damage.
That said, I like the idea behind the concept; maybe we should just politely let the 'crats know that their accounts are a bit more powerful than before and that they should change their passwords a bit more frequently if this goes into effect. Burzmali (talk) 19:09, 24 September 2008 (UTC)[reply]
Depends, I'm not sure anyone's seriously suggesting that this gets installed just yet. I only wrote up the extension since it seemed like (and was) entertaining to put together, and was an amusing concept to explore. Additionally, I really, really hope the bcrats both have really, really strong passwords and change them pseudo-frequently already :) SQL Query me! 00:19, 25 September 2008 (UTC)[reply]
Sorry to interject, but I remember seeing this during the original discussion. I didn't comment, but I recall thinking, great idea, but it would make it possible for a b'crat to coup ... Burzmali (talk) 19:18, 22 September 2008 (UTC)[reply]
My talk archive
Don't know what happened, but MiszaBot III only completed half of the archive process on my talk page just now. It copied the threads to the archive page, but didn't remove them from the talk page. Just thought you should know. Thanks. --Kbdank71 15:03, 24 September 2008 (UTC)[reply]
Your request to be unblocked has been granted for the following reason(s):
This is not a decision that should be made by a single administrator. There are a lot of valid arguments why Misza13's bot(s) are not only not harmful, but positively helpful, to the project. Please take it up at RFC or somewhere like that, Prodego.
Just a few days ago, there was some discussion about the bots, but the blocking admin did not attempt to take part in it or otherwise warn of an impending block.
WP:BOT has been subject to lots of changes in the past few days, but one version that I had explicitly come across the other day (this one) stated that admins currently running bots under their account would have ample time to file a BRFA should the measure pass. That clause of the policy had been added here almost a month ago, and was removed only within the last few days. A block based on that short-term removal seems a little unjust.
The bots have not been demonstrated to:
have harmed, rather than benefited, the encyclopedia, nor
have performed a significant number of harmful actions that could be statistically comparable to the number of beneficial actions.
The bots have been demonstrated to:
automatically block clearly disruptive usernames without significant problems,
automatically block automated pagemove vandals without significant problems, and
do whatever else they do without significant problems.
This user has no serious outstanding conduct/editing issues, has demonstrated a ready willingness to respond to inquiry, and does not contribute negatively or in any other way in serious contradiction of our policies and guidelines. This user's non-bot-related contributions generally result in positive gain by the encyclopedia.
I believe that this user's actions are clearly covered by our longstanding ignore all rules policy, as:
the rule prohibiting the user from contributing to the encyclopedia what appear to be net positives would be the only thing preventing that user from contributing those positive things, and
that rule's enforcement, through blocking an otherwise constructive contributor, creates a net negative impact on the encyclopedia.
I was just looking at this archive and noticed that threads 47 and 48 appear to be repeated (as threads 49 and 50). Doesn't seem like a big deal but thought you might like to know. Regards, Guest9999 (talk) 01:37, 30 September 2008 (UTC)[reply]
That's a known and rare issue. It must've happened that the bot managed to save the archives with the new threads but failed to save the main AN page without these (edit conflict, server error, etc.). Sadly, there's not much I can do about it. Regards, Миша13 22:18, 30 September 2008 (UTC)[reply]
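For context, the duplication arises because this style of archiving is inherently a two-step write, roughly as sketched below (the function and edit summaries are illustrative, not the bot's actual code): if the second save is lost to an edit conflict or server error, the threads remain on the source page and are copied to the archive again on the next run.

# Illustrative two-step archiving write, showing where duplication can occur.
def archive_threads(talk_page, archive_page, stale_threads):
    # Step 1: append the stale threads to the archive page.
    archive_page.text += '\n\n' + '\n\n'.join(stale_threads)
    archive_page.save(summary='Archiving %d thread(s)' % len(stale_threads))

    # Step 2: remove the same threads from the source page.  If this save
    # fails (edit conflict, server error), the threads now exist in both
    # places, and the next run will archive them again, producing duplicates.
    for thread in stale_threads:
        talk_page.text = talk_page.text.replace(thread, '')
    talk_page.save(summary='Archiving %d thread(s) to %s'
                   % (len(stale_threads), archive_page.title()))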