The recent online activism against an online idiot encouraged me to write something I had been thinking about for a while.

The philosophical musing began when I discovered the following on Wikipedia:

Eternal September

From Wikipedia, the free encyclopedia

Eternal September (also September that never ended)[1] is the period beginning September 1993,[2] a date from which it is believed by some that an endless influx of new users (newbies) has degraded standards of discourse and behavior on Usenet and the wider Internet.

The term eternal September is a Usenet slang expression, and was coined by Dave Fischer. The term is so well entrenched that one news server calls itself Eternal September, and gives the date as a running tally of days since September of 1993 (e.g., Sep. 03, 2012 is “September 6943, 1993, the September that never ends.”).[3] This server was formerly named Motzarella.org.[4]

Background

Usenet originated among American universities, where every year in September, a large number of new university freshmen acquired access to Usenet for the first time, and took some time to acclimate to the network’s standards of conduct and “netiquette”. After a month or so, these new users would theoretically learn to comport themselves according to its conventions, or simply tire of using the service. September thus heralded the peak influx of disruptive newcomers to the network.[1]

Around 1993, the online services such as America Online, CompuServe and Demon Internet began offering Usenet access to their tens of thousands, and later millions, of users. To many “old-timers”, these newcomers were far less prepared to learn netiquette than university students. This was in part because the new services made little effort to educate their users about Usenet customs, or to explain to them that these new-found forums were outside their service provider’s walled garden, but it was also a result of the much larger scale of growth. Whereas the regular September freshman influx would quickly settle down, the sheer number of new users now threatened to overwhelm the existing Usenet culture’s capacity to inculcate its social norms.[5]

Since that time, the dramatic rise in the popularity of the Internet has brought a constant stream of new users. Thus, from the point of view of the pre-1993 Usenet user, the regular “September” influx of new users never ended. The term was used by Dave Fischer in a January 26, 1994, post to alt.folklore.computers, “It’s moot now. September 1993 will go down in net.history as the September that never ended.”[6]

Some ISPs have eliminated binary groups (Telus in Canada)[7] and others have dropped Usenet altogether (Comcast,[8] AT&T[9], AOL[10][11]). This led some commentators to claim that perhaps September is finally over.[12][13]

——

I was a university student who used the Internet before AOL, CompuServe, and the World Wide Web brought on the “Eternal September”.  I had to learn netiquette.  Even when AOL and other online services began to link to the Internet, the users were still paying for those services and could be identified, even if they used a screen name (usually required because of a limit on name length) or an account number (CompuServe).

Now?  Anyone can go online, create a pseudonymous email account, and post away.  If one account is blocked, another can be created.

So, how do you make the Internet a better place for polite discourse?  You probably can’t.  But here are some possibilities:

1.   Hardwire metadata into each online transaction.  A person’s location, the connections used, the computer’s identification number…  Sure, these can be spoofed via proxies and offshore servers, but making such metadata a legal requirement gives authorities another tool for prosecution.  System administrators could block problem users and report them to a central agency, in much the same way banks report individuals to credit bureaus.  The user would be notified, and an appeal process would be available.  Of course, the electronic evidence trail would be quite specific and damning.  If a blocked computer is used by various people (such as at a university or within a family), then the owner would be required to discipline the offending user.

1.5  Allow Internet users, via various Internet services, to automatically block anyone with a suspect reputation.  An individual could even filter by various criteria.  Just as Google Chrome warns of suspicious sites, so could social media sites issue a warning when receiving email, instant messages, or other communications from disreputable individuals or computers (such as boiler-room scams).

2.  Pseudonyms are sometimes required.  An individual might be in danger for posting information to the Internet that a government might consider seditious.  A person might have built a following on another website and become known by that screen name, just as a writer is known by a pen name.

3.  The Internet comment system is the electronic equivalent of a newspaper’s “letters to the editor” column.  While it is difficult to monitor comments on every article or web page, there are alternatives.  Comment feeds can allow readers to rate other comments, and the comment system can hide or promote comments accordingly (a rough sketch follows this list).  If the system is widespread, and uses services such as Facebook, Google, Disqus, Twitter, or Yahoo for a commenter to log in, then those systems can track the reputation of the user.  Of course, this can be abused if others bully a specific user, but then those individuals can be identified as well.  (The system can even be programmed to check for people who stalk or bully an individual repeatedly.)

4.  Teach children the importance and responsibility of writing.  “Don’t write anything you don’t want read in public” was a common warning back when that only meant paper and pen.  Now, with instant caching and searching, it’s an even more critical skill.  Teach students how to write clearly, how to argue and debate politely (if deviously), and how to avoid being viewed as a jerk.
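
For the curious, here is a rough sketch (in Python) of how the rating idea in point 3 might work.  The thresholds, names, and scoring rule are invented for illustration only; no particular site’s actual system is being described.

    # Toy sketch of a comment feed that hides or promotes comments based on
    # reader ratings.  All thresholds and names are made-up placeholders.

    from dataclasses import dataclass

    @dataclass
    class Comment:
        author: str
        text: str
        up_votes: int = 0
        down_votes: int = 0

        @property
        def score(self) -> int:
            return self.up_votes - self.down_votes

    def visible_comments(comments, hide_below=-3, promote_at=5):
        """Hide heavily down-rated comments; float well-rated ones to the top."""
        kept = [c for c in comments if c.score > hide_below]
        promoted = [c for c in kept if c.score >= promote_at]
        ordinary = [c for c in kept if c.score < promote_at]
        return promoted + ordinary   # promoted first, the rest in posted order

    feed = [
        Comment("reader_a", "Thoughtful reply", up_votes=7),
        Comment("reader_b", "Drive-by insult", down_votes=9),
        Comment("reader_c", "Polite disagreement", up_votes=2, down_votes=1),
    ]

    for c in visible_comments(feed):
        print(f"{c.author} ({c.score:+d}): {c.text}")

In this toy version the heavily down-rated comment simply disappears from the default view, while anything rated highly enough floats to the top; a real system would tie the votes back to logged-in identities so reputation follows the user.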

I don’t know what the future holds.  The Internet makes it so easy to find information, but it also makes it extremely easy to preach to the choir, to avoid anything which might shatter a fantasy or belief.  I would hope that extremes would be mitigated, in much the way they were a century ago when local newspapers would promote specific agendas without advocating extremes.

I don’t know if that will change.  Even some big event fomented by extremists, like Oklahoma City, won’t make an impact (we didn’t learn the lesson then, and politics has become even more partisan since).  Most likely, it will require a lot of different interests working together to make a positive change, but when no one is listening to anyone else, how do you get people to work together?  Maybe interfaith initiatives can provide some guidance, but the problem with being a peacemaker is that you usually get shot from both sides of the battlefield.

Myself, I’ll continue to (try to) be tolerant and calm when confronted by impassioned commentary.  (Most of the time, I just walk away and ignore it, refusing to read the comments on Yahoo News, for example.)  It’s not easy, but life rarely is.

Now, I’m allowing comments, so be polite, intelligent, and understanding.  Constructive criticism is welcomed, and I enjoy discourse if it makes me think.

11 COMMENTS

  1. Heidi,

    Well said and a very interesting piece. I wish there was a way to market kindness and netiquette with the same “cool” cachet that many I know assign to negative and self-destructive behaviors, e.g., how much alcohol/drugs they consumed & lived to tell about it, etc. It would be great to live in a world where those who were trolling and cyberbullying evolved into peaceful beings. How do we make kindness the new cool?

    Denise

  2. Yeah, that recent Twitter blow-up with that troll and Mark Millar helping to shut him down was disturbing, and I’m happy they took that psycho down. However, what you’re suggesting is ludicrous and dangerous for freedom of speech. I don’t think anyone wants the government to have the kind of powerful eavesdropping capabilities you’re suggesting.

    Dealing with trolls is just the cost of living in a free society. Sorry.

  3. The rest of this is interesting and wise, Torsten, but option 1 for a more civil internet is a terrible idea. Why? You answered it yourself with #2. And you can’t very well say “Well, but that’s only for people who deserve or need it” because the people who will use your paper trail against you never think you deserve your privacy or freedom.

    And it isn’t just governments, though they’re the most prominent danger to life and limb. Employers, stalkers, internet trolls… if the information is hardcoded in there, however secret it’s supposed to be, it’s now there to be found. It’s simply not safe.

    And it’s not just a matter of safety, it’s a matter of chilling effects. Anonymity – or at least pseudonymity – is a great gift of the internet. It lets people discuss, form communities and find support for all kinds of things that would be far more difficult with a real name attached.

    Granted, sometimes these things are ridiculous, such as Otherkin (At heart, I’m secretly a unicorn! Or werewolf!) or a love of My Little Pony that might be embarrassing for people of a certain age and gender. But in other cases it can be quite seriously important, such as for people who are HIV positive or are concerned about some other disease, people who are concerned about things going on in their communities or places of business, and people who for one reason or another just don’t fit in with the people around them.

    Now, teaching children early that the Internet Is Forever, and you can never take back or erase words there that you regret, yes. That’s a good, even a vital concept. Trying to drum into their heads some sense that the people on the other side of the screen are real human beings who deserve respect is also important.

    Weirdly, I think gossip and the eternal memory of the internet is another vital, civilizing force that we need more of rather than less. Everyone makes gaffes from time to time, but if someone has indulged in malicious behavior, fraud, plagiarism or abuse of power, then yes. They deserve the word of mouth reputation that comes with that, and other people deserve to be warned.

  4. A couple of years ago I was standing outside, in a ..bad area, when a flatbed truck came rushing up, the back of the thing packed with masked men, each holding an automatic weapon. I didn’t move and neither did my buddy. The guys on the truck mumbled among themselves for a moment and then it pulled away.. boy, I sure wish I had these evil tweeter-taker-downers there to protect me.

  5. And then.. just this very evening:

    I’m walking up 6th avenue, toward Times Square (admittedly, yeah, just a little in the bike lane.. but there’s plenty of room to pass) and this guy comes toward me on a ten speed, hunched down, doing his best Lance Armstrong .. 20 feet –15 –10 — I’m thinking .. gee.. this..this retard has plenty of room, he’s going to stop coming right-at-me at some point .. and so I just stop..don’t brace for impact or anything .. surely this retard will swerve (??) – and BOOM! .. idiot runs right into me – and rides off.
    Boy, I sure could’a used some help from these evil-twitter-taker-downers with that one 2.

  6. (Do you have anything interesting, relevant, or useful to contribute, Horatio?)

    I was on rec.arts.comics.* back in the day, and remember the dawn of Perpetual September. The internet was simply a different beast in those days. Access was a privilege granted through trust relationships (e.g. my college got on through a consortium of universities) rather than something you just bought from a commercial ISP. It was possible to enforce the social norms of the classic net, because it was actually conceivable to kick a rogue site or abusive user off!

    But there’s no going back to that, any more than we can go back to a society where everyone knew their neighbors. It’s too late to build that kind of authenticated accountability into the technology of the internet, and any attempt to do so would break it… either technologically or by sabotaging the cooperative nature of the net’s global participants.

  7. I was online prior to September 1993, and yes, Usenet (and therefore the whole internet) was substantially better before AOL peered with it.

    However, there’s nothing I resent more than a service that excludes people like me because I’ve made a deliberate choice to obscure my online identity. No, I don’t want to join your online credentialing service. No, I don’t have a Facebook account. Yes, I’m blocking as many of your ads and tracking scripts as I can.

    For as much as Torsten thinks it would improve discourse to verify identity, I think it’s more important to enable privacy to exist. That’s part of the beauty and joy of the internet, after all. At least, it was in 1992 when I first got online.

    Slashdot.org has a really great system for handling moderation. Users have a login and they can choose to associate that login with other identifying information. They can also post anonymously. Other users moderate the comments on a scale from -1 to 5. Users who are consistently moderated up are given a bonus to the scores on their posts. Users who are consistently moderated down are penalized until their default score passes below the normal threshold for viewing. The more interest a comment thread has, the more and better the moderation will be, and it’s all controlled by the users rather than the site’s admins.

    That is a model I can live with.

  8. Thanks for the feedback.

    I notice that Google Plus is fairly civilized, although there is some idiocy in the comments, either snark or people trying to hook up.

    Facebook also seems like a nice place to chat… and I’ve seen lots of arguments, but people tend to shake hands afterwards, or at least move on.

    Perhaps it is the ability to control who you share information with, who you “friend”. I think part of it is also the identification; you know with whom you are engaging.

    But then there ARE sites where the discussions are spirited and asshattery is kept to a minimum. Wired.com, for example.

    Other sites have shut down their comment threads, like DC Comics. Some cartoonists as well.

    I wonder what happens if a major site like Yahoo or YouTube adjusts its policy to say it will contact a user’s ISP to report abuse, and, if that ISP’s users cause too many violations, the ISP itself will be blocked? (A bit like a DMCA warning…)

    HEADLINE: “Ghana blocked from internet by rest of world due to scammers, Togo domains proliferate”

  9. The internet has evolved at such an exponential rate that it’s much, much too vast and too useful for any kind of system-wide ID-check policing. But with that increase in available audience, I’m always surprised at how little personal control bloggers and site administrators exercise over their own areas of the internet. If you ran a print magazine, you wouldn’t just publish every letter you receive, nor would you give up entirely because the number of letters is too great. You would sift through the correspondence, filter it, edit it, and whittle it down into a compelling sample of argument and commentary. It should be the same online.

    Most sites regulate comments in some way, either by keeping them in quarantine or by actively reviewing them as they’re posted. Yet of the sites I visit most often, only the NY Times seems to have it down pat, with a user ratings system, an editorial ratings system, AND an active editor who moderates, deletes, and otherwise keeps it moving until they get tired, at which point they close the comments.

    Some sites say they don’t want to inhibit the conversation like that and will only monitor for insults or harmful speech, but what does that do for the intelligent discourse some of your viewers may come to your site to enjoy? If trolls run roughshod over your comments with distracting, pointless, badly argued tangents, but in a nice way that doesn’t threaten everybody, what exactly is gained?

    The best solution is for site admins to accept their new role as editors-in-chief and use that authority to guide discussion.

  10. I’m a veteran of the Usenet years as well, and sometimes I miss the way those groups seemed to be self-policing. New posters completely lacking in netiquette could be gently pointed in the right direction: it was usually quite civil. But there’s no going back to that, so I’ve definitely become a fan of having a moderator. It’s not always practical to check everything before it’s posted, but you can at least put a stop to bad behavior. And nobody has even mentioned the robot spam (multilevel marketing scams and such) that I see on some boards occasionally. I know the filter on my WordPress blog catches that stuff pretty regularly.
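
The Slashdot-style moderation described in comment 7 is concrete enough to sketch out.  Here is a loose Python illustration of that kind of karma-and-threshold scoring; the constants and function names are invented for illustration and are not Slashdot’s actual implementation.

    # Loose sketch of the moderation model described in comment 7:
    # posts are scored on a -1..5 scale, users with consistently good karma
    # start higher, and readers see only posts at or above a chosen threshold.
    # All constants here are placeholders, not Slashdot's real values.

    def default_score(karma: int, anonymous: bool) -> int:
        """Starting score of a new post, before any reader moderation."""
        if anonymous:
            return 0        # anonymous posts start low
        if karma >= 25:
            return 2        # consistently up-moderated users get a bonus
        if karma <= -10:
            return -1       # consistently down-moderated users start below view
        return 1            # ordinary logged-in user

    def moderated_score(start: int, mod_points: int) -> int:
        """Apply reader moderation, clamped to the -1..5 range."""
        return max(-1, min(5, start + mod_points))

    def visible(posts, threshold=1):
        """Posts the reader actually sees at their chosen threshold."""
        return [p for p in posts if p["score"] >= threshold]

    posts = [
        {"who": "regular", "score": moderated_score(default_score(30, False), 2)},
        {"who": "anon",    "score": moderated_score(default_score(0, True), -1)},
        {"who": "newbie",  "score": moderated_score(default_score(0, False), 0)},
    ]

    print(visible(posts))   # the down-moderated anonymous post drops out of view

The appeal of this design, as the commenter notes, is that the readers themselves do the moderating, and reputation only nudges where a post starts, not whether it can be posted at all.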

Comments are closed.