
.chat.ru


marhleet


http://www.spamcop.net/sc?id=z2514297968zb...02386c49e7f335z

Finding links in message body

Recurse multipart:

Parsing text part

Parsing HTML part

Resolving link obfuscation

http://harrisphwo.chat.ru

Please make sure this email IS spam:

these Russian emails will nearly always come up with a blank line on the processing page.

the link obfuscation will sometimes work quickly with an F5

sometimes 40 F5's are needed

sometimes won't work at all

if the web address is pasted into SpamCop on its own, it always processes straight away

Parsing input: http://harrisphwo.chat.ru

Routing details for 195.161.119.84

[refresh/show] Cached whois for 195.161.119.84 : yuri[at]unix.ru

Using last resort contacts yuri[at]unix.ru

Statistics:

195.161.119.84 not listed in bl.spamcop.net

More Information..

195.161.119.84 not listed in dnsbl.njabl.org


195.161.119.84 not listed in cbl.abuseat.org

195.161.119.84 not listed in dnsbl.sorbs.net

Reporting addresses:

yuri[at]unix.ru

thinking this is a SpamCop processing glitch that needs fixing.

(wouldn't have found this if good old OptusNet hadn't started 5.7.1 failing my quick submit entries. now I gots to manually do 75% of my spam again. bugga)
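Aside: the "not listed in bl.spamcop.net" lines in the Statistics section above are standard DNSBL queries - the IP's octets are reversed, the list's zone is appended, and an ordinary A-record lookup is made; the address is listed only if that name resolves. A minimal Python sketch, with the zones taken from the parse above:

    import socket

    def dnsbl_listed(ip, zone):
        # Reverse the octets and append the DNSBL zone,
        # e.g. 195.161.119.84 -> 84.119.161.195.bl.spamcop.net
        query = '.'.join(reversed(ip.split('.'))) + '.' + zone
        try:
            socket.gethostbyname(query)   # resolves (to 127.0.0.x) if listed
            return True
        except socket.gaierror:           # NXDOMAIN means not listed
            return False

    for zone in ('bl.spamcop.net', 'cbl.abuseat.org', 'dnsbl.sorbs.net'):
        print(zone, dnsbl_listed('195.161.119.84', zone))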


Hi, marhleet,

...IIUC, as with your SpamCop Forum entry "Replying to spam using '404' call backs ?," your questions are addressed in the "SpamCop FAQ" (see links so labeled near top of page) entry labeled "SpamCop reporting of spamvertized sites - some philosophy." Please read that FAQ entry and return here with any questions you may still have. Thanks!


IIUC, as with your SpamCop Forum entry "Replying to spam using '404' call backs ?," your questions are addressed in the "SpamCop FAQ" (see links so labeled near top of page) entry labeled "SpamCop reporting of spamvertized sites - some philosophy." Please read that FAQ entry and return here with any questions you may still have. Thanks!

this one's different

the link resolution does NOT report failure

it shows nothing, as shown in the quote section at the top.

yet the web address on its own ALWAYS shows details.

the Chinese ones clearly showed the red warning about failure to locate the source IP whatsis.

the .chat.ru ones always work.

Resolving link obfuscation

http://harrisphwo.chat.ru

Host harrisphwo.chat.ru (checking ip) = 195.161.119.84

host 195.161.119.84 = srv84.chat.ru (cached)

Please make sure this email IS spam:

this one only took 3 F5's

yet it didn't come up with Yuri's email to post to. Odd; that's a new first.
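Aside: those two "Host ... =" lines in the parse are just a forward A-record lookup followed by a reverse (PTR) lookup. In Python that is roughly the following, assuming the name still resolves:

    import socket

    host = 'harrisphwo.chat.ru'
    ip = socket.gethostbyname(host)      # forward lookup: name -> IP
    rdns = socket.gethostbyaddr(ip)[0]   # reverse lookup: IP -> PTR name
    print('Host %s = %s' % (host, ip))   # cf. "Host harrisphwo.chat.ru ... = 195.161.119.84"
    print('host %s = %s' % (ip, rdns))   # cf. "host 195.161.119.84 = srv84.chat.ru"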


this one's different

the link resolution does NOT report failure

it shows nothing, as shown in the quote section at the top.

<snip>

...Perhaps, but the FAQ entry to which I referred you is also relevant for this (just a different part of it, perhaps):
<snip>

However important anyone may think that disrupting the relationships between the spamsites and their providers may be, realize that the various spamfighting tools are designed for specific purposes. If you start trying to drive a nail with a screwdriver, you're going to find that it doesn't work as well as a hammer -- likewise some other tool-related examples.

In the case of spamcop, its parser is designed to determine spamsources *primarily* [IMO] and secondarily do things like feed possible relays to the relay testers for 'handling' like testing/listing, and to notify providers for spamsources and spamvertisers.

<snip>

So, what all of that comes down to is that the business which SC performs of finding the spamvertisers in the body isn't as important as the business of SC finding the spamsource -- because the spamsource determination feeds the SCbl, whereas the spamvertiser discoveries tend to notify blackhat providers of things about spamcop reporters and don't feed anything very potent at all.

If you want to get into taking action against the business of spamsupport, which is what spamvertiser providers are doing, then you will have to appreciate blocklists which put leverage against them, such as spews and to a lesser extent spamhaus.

What spews does is spews business. What SC does is SC's business. The two lists are very very different and SC's doesn't do anything about spamvertisers or spam support.

<snip>

There are many reasons why a URL that a sentient being can see in a spam may not be seen by the parser.

<snip>


<snip>

should I accept that I am fighting the open relays and not also the web sites, where possible?

...If what you mean by "open relays" is sources of spam (in the sense of the IP address of the machine that is sending the spam) then I would say yes.

I figured as long as Yuri's inbox is getting lots of official SC emails he might actually do something.

should I accept that I am fighting the open relays and not also the web sites, where possible?

My view (and I recognise others may have a different opinion) is that reporting spamvertised URLs via SpamCop is an entirely fruitless activity. I've seen no evidence of SC reports of spamvertised URLs being effective in closing the websites involved. If this is an important activity there are probably far more effective methods of tackling this issue.

Of course, if you're not quick reporting, it does no harm to include the spamvertised URL reports but I wouldn't waste any sleep about any failures in the URL reporting.

Likewise if you have large numbers that need manual reporting then simply choose as many as you have time for. In the end your time is more valuable to you than being overly burdened by reporting spam. Doing as much as you can in the time you have available is appreciated and valuable.

Thanks.

Andrew


My view (and I recognise others may have a different opinion) is that reporting spamvertised URLs via SpamCop is an entirely fruitless activity.
I tend to agree, to the extent that I think it is difficult to topple a spammer's entire web enterprise via a handful of complaints to a single outfit. There are simply too many pieces too well dispersed (botnets, reverse proxies, portals, crooked DNS, etc.) to make these vulnerable to isolated complaints. Where it is convenient for me, or where the offense is egregious enough, I do often report URLs found in spam, but not so much these days as in the past. I do still like to trace these down occasionally to see what these people get up to.

On the other hand, the spam websites are useless when the mail doesn't go through and no suckers are persuaded to visit them. This is where SpamCop is useful, in identifying and publishing the sources of spam mail in real time, for those who wish to block or detain the spam.

Anyone who wishes to follow up on spam websites might find some good info on the Wiki at ReportingSpamWebsites.

-- rick


thanx for all the pointers.

can I point out that the issue raised in the opening section of this thread has not been addressed by anyone?

the resolving of the link obfuscation ...

should yield a result, or show a failure, in red.

showing nothing is a bug.

my point, that after a refresh it sometimes does show, means it can do it. why is it NOT doing it?

many have said it is nice to send reports to those responsible for the spam web sites.

without the resolution of the web page and providing the links to send reports to, this part of the deal fails.

so, without sending me back to the FAQs on what and why SpamCop likes to do, can someone explain the bug where it is not doing what it tries to do?


...so, without sending me back to the FAQs on what and why SpamCop likes to do, can someone explain the bug where it is not doing what it tries to do?
Not sure that anyone 'here' is able to explain it. We're users, like you. Don (SC Admin) is about the only SC staff member who comes here with any frequency, and I don't recall that he ever made any claim to be an SC spokesperson.

It (the bug) is a known issue, commented on many times in passing, here in the forums and in the newsgroups - even with its own topic/thread on occasion, IIRC. When the DNS-type lookup fails to resolve within whatever small amount of processing time is allowed on the actual server hosting the parse, there seem to be a number of responses given - or none at all, as you have seen. The background you have been pointed to shows URL resolution, interface with the SURBL and informative reports to the host are not priorities for SpamCop. Nor is fixing a minor, irritating bug associated with those functions, apparently. Many users would wish it otherwise. SC seemingly prefers to concentrate on its "main mission".

At the end of the day, as you have found, a user can usually get a reporting address from the parser (as you mentioned first-up) for either manual (outside of SC) or user-specified reporting (paying users). Another part of the reason, no doubt, SC has not hastened to rectify the bug.

Failure to resolve (if not the bug itself) I think has been well covered - but to recapitulate: obviously the parser hasn't the time to wait for a 'difficult' resolution, given the volumes processed. Using the parser in the secondary mode of resolving just the URL involves fewer resources, which is why, as you have seen, it sometimes/often works (you can also get reporting addresses for IP addresses, even e-mail addresses, that way, BTW).

Logically, the time of day might make a difference - any time the parser and the rest of the DNS resolution process are under less demand is more likely to produce a result (being 15-18 hours ahead of most of the US myself, I seem to get better results than the 'locals'). We have seen slightly different behaviors from different SC servers in handling some similar processing; that could be a factor. The processing of all the parts of a parse is suspected not to be linear - there may be some parts that complete (as much as they are going to) before others that are printed earlier in the parse output, which makes it slightly loopy to read. Some networks may be blocking or hindering SC lookups, which might be an associated factor (and why SC doesn't get very excited about the whole thing).

And in your concurrent topic (http://forum.spamcop.net/forums/index.php?showtopic=10003), we've explored the 'Russian doll' (layered) aspects of the wretched .CN links, which also mostly/often fail to resolve, and can conclude SC's handling of those is quite ineffectual even when it works - another reason official enthusiasm may not attach to anything related to the resolution of URLs.
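To make the "limited patience" idea concrete, here is a guess (in Python, using the dnspython package) at the sort of time-boxed lookup that might be involved. The two-second budget and three retries are invented for illustration, not SpamCop's actual values, and each retry is the programmatic equivalent of your F5:

    import dns.resolver, dns.exception

    def resolve_with_budget(name, lifetime=2.0, retries=3):
        resolver = dns.resolver.Resolver()
        resolver.lifetime = lifetime           # give up after this many seconds
        for attempt in range(retries):         # each retry ~ one press of F5
            try:
                answer = resolver.resolve(name, 'A')
                return [r.address for r in answer]
            except dns.exception.Timeout:
                continue                       # slow nameserver: try again
            except dns.resolver.NXDOMAIN:
                return []                      # definitive: no such host
        return None                            # gave up, like the silent parse

    print(resolve_with_budget('harrisphwo.chat.ru'))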

The only people who would know for sure don't talk about it. With reason, I guess. You or I might think an unfixed bug is not very professional. But we're not paying the bills. Well, I'm not.


the resolving of the link obfuscation ...

should yield a result, or show a failure, in red.

showing nothing is a bug.

my point, that after a refresh it sometimes does show, means it can do it. why is it NOT doing it?

I don't think that failing to show the trace is necessarily a "bug," more likely one would call it "failure to do extra duty on this particular message." I see that Farelf has addressed this pretty well, so I will not step on his answer to you.

many have said it is nice to send reports to those responsible for the spam web sites.

without the resolution of the web page and providing the links to send reports to, this part of the deal fails.

What is the nature of the "deal" that you feel has failed? Anyway, you are perfectly entitled to report these websites yourself, you do not need to do it through SpamCop (and, as you have found, SpamCop can help you do this, since these queries are outside the "critical path" of spam reporting).

so, without sending me back to the FAQs on what and why SpamCop likes to do, can someone explain the bug where it is not doing what it tries to do?

I'll try not to take it personally that you reject the FAQ and Wiki materials offered to you, even though I myself (among others here) have spent a great deal of personal time and effort on them. The reason you get sent to the FAQ is not because we are lazy or uninterested, it is simply that you have asked questions that have appeared here dozens upon dozens of times in the past, questions for which we have attempted to compile and refine standard answers. I believe you were encouraged to read the FAQ and then come back with any questions you still have. If you still have questions, or the matters are not clear, then you give us an opportunity to improve the canned answers for the next askers.

-- rick


I'll try not to take it personally that you reject the FAQ and Wiki materials offered to you, even though I myself (among others here) have spent a great deal of personal time and effort on them. The reason you get sent to the FAQ is not because we are lazy or uninterested, it is simply that you have asked questions that have appeared here dozens upon dozens of times in the past, questions for which we have attempted to compile and refine standard answers. I believe you were encouraged to read the FAQ and then come back with any questions you still have. If you still have questions, or the matters are not clear, then you give us an opportunity to improve the canned answers for the next askers.

Thanks for that block of great thoughts and words. Totally removed the 'need' for one of those nasty Wazoo posts!

Kudos also to Farelf's encapsulation of so much material.


can I point out that the issue raised in the opening section of this thread has not been addressed by anyone?

the resolving of the link obfuscation ...

should yield a result, or show a failure, in red.

showing nothing is a bug.

my point, that after a refresh it sometimes does show, means it can do it. why is it NOT doing it?

To say it a different way: the parser does not wait to get the current correct address, and official spamcop knows that it doesn't. Apparently (and explicitly so, per Julian, who created spamcop but sold it years ago), official spamcop has abandoned efforts to accurately report spamvertised websites, concentrating on the source of the spam, because the object of server admins is to prevent spam by filtering with blocklists. The code involving the spamvertised sites remains, partially, from my observation, because when it does work, some people are interested in using spamvertised sites in their anti-spam efforts. There are two ways they do this: one is to filter using blocklists of spamvertised websites; the other is to report them to registrars and whoever Knujon reports them to. The spamvertised website blocklists use other sources as well as spamcop, so any that spamcop misses are picked up in other ways. And, if the reporter is diligent, they can get a correct URL - either from spamcop or in other ways - to report via other means.

It is well known that the fast-flux websites do not resolve via spamcop. Most reporters do not care because they use quick reporting which does not even look at the spamvertised websites. And, apparently, spamcop isn't interested in trying to fix the problem - including not showing anything if there is a failure. spamcop is designed to be used by professionals - many things are obvious to professionals and don't require failure messages. Non-professional users either learn enough so that they know what to look for or eventually stop using spamcop. Since there is no other reporting service, it is either use spamcop or JHD (just hit delete).

many have said it is nice to send reports to those responsible for the spam web sites.

without the resolution of the web page and providing the links to send reports to, this part of the deal fails.

Those who think it is important to report websites, do so by using Complainerator or Knujon or do so manually. If they use spamcop, it is only a tool - as people have pointed out, you can get the parser to find the correct address by pasting individual URLs in the webform.

In fact, spamcop is only a tool; it is the reporter who is making the report. Using spamcop to automatically report spamvertised websites is like using a screwdriver to hammer a nail. What spamcop works really well at is finding the source and keeping it listed as long as spam comes from that source. It often serves as an early warning signal to responsible server admins that something has gone wrong with their spam control (though, in my experience, they pay more attention to a manual report). And the blocklist is used by many server admins as one of their filters to prevent spam from entering their system, catching spam from new sources. (The perpetual sources of spam are caught by using one of the other available blocklists that do not delist automatically, since spammers discovered that if they rotate IPs, they can be delisted from spamcop. It takes longer to get listed at other blocklists, so server admins still use spamcop.)

so, without sending me back to the FAQs on what and why SpamCop likes to do, can someone explain the bug where it is not doing what it tries to do?
I think it was already explained. The parser tries to look up the address, but does not wait or retry. spamcop doesn't care whether it succeeds or not, and assumes that users don't care either, so it doesn't print a failure message.

There is a lot of information here on /how/ to find links yourself if you are interested.

Miss Betsy


Some further observations...

the resolving of the link obfuscation ...

should yield a result, or show a failure, in red.

showing nothing is a bug.

Maybe we have a terminology issue here. "Removing link obfuscation" is NOT the same as resolving the site to an IP address. Sometimes, spammers try to obscure their web links using various well-known techniques. So, the first thing that SpamCop would have to do is to pass the URL through a de-obfuscation process to remove this bogus encoding. The result is a clean, unencoded URL (which you see in your results). After that, it is still necessary to DNS-resolve the URL to find out where it sits (i.e., which IP address). This latter is what SpamCop failed to do in this case, for reasons already noted.

In other words, SpamCop did actually complete the deobfuscation of the link and reported its results to you. So, if there is a bug, it is probably not in the deobfuscation process.
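To make the two stages concrete, here is a toy Python sketch: stage one undoes a couple of well-known obfuscation tricks (percent-encoding, a fake "trusted-site@" userinfo prefix), and stage two is the separate DNS step that can fail on its own. Real spammer obfuscation is far more varied than this, and the example URL is just the one from this thread, re-encoded by hand:

    import socket
    from urllib.parse import unquote, urlsplit

    def deobfuscate(url):
        # Stage 1: undo %xx encoding and drop any fake user@ prefix
        # (this toy version also discards ports and query strings)
        url = unquote(url)
        parts = urlsplit(url)
        return parts.scheme + '://' + (parts.hostname or '') + parts.path

    def resolve(url):
        # Stage 2: the DNS step, which can fail even when stage 1 worked
        try:
            return socket.gethostbyname(urlsplit(url).hostname)
        except socket.gaierror:
            return None

    clean = deobfuscate('http://harrisphwo%2Echat%2Eru/')
    print(clean, '->', resolve(clean))   # prints an IP, or None on DNS failure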

-- rick


without the resolution of the web page and providing the links to send reports to, this part of the deal fails.
I also have been in receipt of a steady stream of ".chat.ru" spams for several weeks now. These links mostly aren't resolved in the SpamCop parse. However I also have Knujon as a standard co-recipient of my SC reports and believe me, lack of SpamCop resolution of the site doesn't deter Knujon in the least from resolving and "doing their thing" with it.

I also have been in receipt of a steady stream of ".chat.ru" spams for several weeks now. These links mostly aren't resolved in the SpamCop parse. However I also have Knujon as a standard co-recipient of my SC reports and believe me, lack of SpamCop resolution of the site doesn't deter Knujon in the least from resolving and "doing their thing" with it.

Knujon is a good place to forward this sort of mail, since they can accumulate it and present evidence in bulk to registrars, hosting providers, etc. This is probably far more effective than the "water torture" from streams of individual SpamCop reports.

-- rick


And, apparently, spamcop isn't interested in trying to fix the problem - including not showing anything if there is a failure. spamcop is designed to be used by professionals - many things are obvious to professionals and don't require failure messages.

I don't agree at all with the notion that not showing a failure message is something justifiable in any way.

It's all well and good that Spamcop puts its focus on identifying and larting relays, and that spamvertised page reporting is just a side task that Spamcop may or may not find the time to do. However, even when it is an optional thing, it *has* to give proper feedback. Silent failure is a total no-no in *any* UI.

Insinuating that only n00bs would be irritated by an obviously missing error message is not acceptable. I consider myself a professional in both the spam and UI design business, and I was irritated by this silent failure myself before (and even posted about it here years ago, IIRC).

Adding up the posts in this forum that report this same thing shows that a sizeable amount of time is wasted by both posters and the people trying to help them, pointing them to the FAQ again and again, complete with repeated grumbling, all resulting from what is simply bad quality in a service many people here (me included) pay for yearly through their Spamcop Email subscription.

It would save everybody a lot of time and frustration if the few minutes of developer time needed to introduce a message like "Timeout while resolving xyz" or "Gave up on resolving xyz" were invested one of these days.
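In code terms, the fix being asked for is about one branch: report the give-up case instead of printing nothing. A minimal Python sketch, where the message wording is my suggestion and not actual or proposed SpamCop output:

    import socket

    def resolve_and_report(host):
        try:
            ip = socket.gethostbyname(host)
            print('Host %s = %s' % (host, ip))
        except socket.gaierror:
            # The one line whose absence this whole thread is about:
            print('Gave up on resolving %s (nameserver slow or unreachable)' % host)

    resolve_and_report('harrisphwo.chat.ru')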


It was very frustrating to me, as a technically non-fluent person, that the developers of spamcop did not see fit to include failure (and other) messages that were intelligible to me. However, people in any business tend to omit what to them is 'obvious' and to use jargon that outsiders don't comprehend. (I had no clue what UI meant, for instance, until I looked it up.) There are always different levels of users - to the people I work with, I am a computer wizard! For instance, finding out how to alphabetize the icons in a SharedDocs screen caused the next user to panic because it looked 'different'; it was not something that I thought needed an introduction. I thought the next user would say, 'Thank God, the icons are alphabetized!' spamcop doesn't seem to want to cater to that level of understanding.

I was merely guessing that this particular failure would be 'obvious' to professionals as a reason not to insert a failure message.

Miss Betsy


I don't agree at all with the notion that not showing a failure message is something justifiable in any way.
What is the failure that you think is not being reported?

Ok, this time, by actually reading your complete post, I see your point and I agree. You just want SpamCop to confess that it tried to resolve the link but did not finish.

Because most of these cases come about (I think) due to spammers' jackleg DNS, you can't really call it a bug or failure on SpamCop's part (maybe a lack of patience, but that's about it). So, it would be important to word this message in such a way as to put the blame where it belongs (like "Abandoned, external nameservers are slow" or the like). Otherwise, a lot of people are going to be reporting "bugs" where there really are none (or are there?).

-- rick


However, even when it is an optional thing, it *has* to give proper feedback. Silent failure is a total no-no in *any* UI.

Insinuating that only n00bs would be irritated by an obviously missing error message is not acceptable. I consider myself a professional in both the spam and UI design business, and I was irritated myself by this silent failure before (and even posted about it here years ago, IIRC).

Aggregating the number of posts in this forum which report this same thing shows that a sizeable amount of time is wasted by both posters and people trying to help them, pointing them to the FAQ again and again, complete with repeated grumbling that results from something that is simply bad quality in a service many people here pay for yearly through their Spamcop Email subscription (me included).

It would save everybody a lot of time and frustration if the few minutes of developer time needed to introduce a message like "Timeout while resolving xyz" or "Gave up on resolving xyz" would be invested some of these days.

Over-quoting involved here, but ... the flip side to your ramble here is that so many folks don't actually see this type of page output to begin with. If you want to reference the posts here, then see if you can accumulate those that include the suggestion to "turn on Full/Technical details" such that this type of data would become apparent.

Add to this the times that it has been additionally stated: "None of this removes the fact that you can generate and submit your own complaint to the appropriate party."


<snip>

a sizeable amount of time is wasted by both posters and people trying to help them, pointing them to the FAQ again and again

<snip>

...Interesting! When I first read this sentence, I mistakenly thought you were criticizing those of us who reply to the complaint by pointing to the FAQ. Since that is not what you were doing, I'll just say that pointing to the FAQ is a lot less time-wasting than posting the same "full" answer over and over. :) <g>
bad quality in a service many people here pay for yearly through their Spamcop Email subscription (me included).

<snip>

...IIUC, you're paying principally for the e-mail service, not the parsing results (access to which SpamCop offers for free). I've never seen SpamCop ever offer or claim to provide quality hyperlink resolution. That's just a bonus, when and if it works, and only to report to the "owner" of the spamvertized site that it is being used in that way, not to add to a blacklist or take any other negative action.

...Personally, I'm coming to the conclusion that we'd all be better served if the SpamCop parser would stop even trying to resolve and report spamvertized web sites, however much a benefit it can be when it does work.


I guess a lot of people figure that when SpamCop doesn't resolve and report a spam URL, SpamCop is doing 95% of the job and then petulantly refusing to do the last 5%. As someone who has spent 10+ years studying the techniques spammers use to cover their tracks, I can tell you that this is not the case -- tracing out spam URLs can be a confusing nightmare compared to simply finding out where the original message came from. After having wrestled with this stuff myself for so long, I have a certain amount of sympathy for the poor SC parser. As a programmer of journeyman skill at least, I also pale at the thought of having to write all the code that could successfully do the job.

For example, to find the source (originating IP address) of a spam message, you have a relatively straightforward task:

  1. Parse the header and get the source IP address (and, optionally, any open relays that come up after this address).
  2. Do an IP-WHOIS lookup on each of these addresses and get the abuse contact info.
  3. Send a report to the abuse contact, explaining that the spam was either originated by or was relayed by the host at the address in question.

These are largely mechanical procedures, and deal with matters of fact (and not judgment). You can train a computer to do them (as the SC developers have done). Even at that, however, there is still a great deal of complication (e.g., SC users must run Mail Host Configuration in order for their messages to be correctly parsed).
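As a rough illustration of how mechanical step 1 is (and why it still needs care), here is a naive Python sketch. It simply grabs the first bracketed IPv4 literal from the Received headers; real parsing, and Mail Host Configuration in particular, exist precisely because headers cannot be trusted this blindly. The sample message is invented:

    import re
    from email import message_from_string

    def source_ip(raw_message):
        msg = message_from_string(raw_message)
        for received in msg.get_all('Received') or []:
            # Pull the first bracketed IPv4 literal out of each Received line
            m = re.search(r'\[(\d{1,3}(?:\.\d{1,3}){3})\]', received)
            if m:
                return m.group(1)    # topmost hit: a naive "source" guess
        return None

    sample = 'Received: from mail.example ([195.161.119.84])\nSubject: x\n\nbody\n'
    print(source_ip(sample))         # 195.161.119.84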

On the other hand, if you really, really want to deal with spam websites, this is the kind of vastly more complicated protocol that you generally have to follow:

  1. Recognize and isolate the links found in the spam.
  2. Deobfuscate the links if they have been massaged by overencoding (MIME or HTML-CE).
  3. Repair the links if they have been deliberately munged (whitespace or non-printable characters inserted, etc.).
  4. Determine from the context of the message whether they are actually directly connected to the spam and not:

    1. Links that were inserted by others later on, after the message left the spammer's hands (e.g., advertisements placed by freemail services)
    2. Links that are "innocent bystanders" (e.g., news or stock websites often quoted by spammers)
    3. Links that have nothing to do with the spam, and have been placed by the spammer simply to confuse n00b investigators.

  5. Fetch each of the URLs (preferably using curl or wget rather than a browser) to see whether they redirect to other sites by any of half-a-dozen common mechanisms. If they do, add these other sites to your list of URLs to investigate.
  6. Run a DNS lookup (e.g., "dig a") on each of the URLs.
  7. Inspect the DNS printout to see whether more than one IP address is provided, and whether they have an abnormally low TTL (a sure sign of botnet activity).
  8. If you can't get a DNS result, run a "dig ns" to get the authoritative name servers, then repeat the last two steps for each of these.
  9. If you suspect a botnet, then monitor DNS activity for the URL for as long as possible (days or weeks at least) to find all of the addresses that are used. There will be THOUSANDS of them. And, oh, by the way, the authoritative name servers are on the botnet too, so you have to collect all of those addresses as well.
  10. Run an IP-WHOIS query on each of the IP addresses you collected, and find the abuse contacts (if any) pertaining to these addresses.
  11. Compose reports to each of these contacts, with explanations as to why you are contacting them (e.g., "The URL in the attached spam redirects to another URL which appeared to be hosted in your domain for about three minutes earlier today").
  12. For extra credit, look up the domain of each of these URLs in WHOIS and get the identity of the registrar that sold it. Then root around on that registrar's website in the (possibly vain) attempt to find a mechanism for reporting spam. Ignore the actual registrant info; it is sure to be either cloaked or forged.
  13. Oh, yeah, if you are dealing with botnet hosting, rest assured that you have NOT reached the actual spammer's web host, which is hidden behind a wall of proxies that you cannot penetrate.

There are obviously many steps here (and I've either left out or condensed a few); most of them require human judgment. For example, you cannot teach a computer to figure out whether a given URL is actually the 'spammy' link as opposed to simply being camouflage.
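Steps 6 through 8 are the most mechanical part of the protocol, and about the only part a machine does well. A sketch using the dnspython package, where the 300-second threshold is an arbitrary illustration rather than an established fast-flux cutoff:

    import dns.resolver

    def inspect_host(host, ttl_threshold=300):
        # Steps 6 and 7: A-record lookup, then the many-addresses/low-TTL check
        answer = dns.resolver.resolve(host, 'A')
        ips = [r.address for r in answer]
        ttl = answer.rrset.ttl
        if len(ips) > 1 and ttl < ttl_threshold:
            print('%s: %d addresses, TTL %ds - possible fast flux'
                  % (host, len(ips), ttl))
        return ips

    def name_servers(domain):
        # Step 8: the "dig ns" equivalent, so the NS hosts can be chased too
        return [str(r.target) for r in dns.resolver.resolve(domain, 'NS')]

    print(inspect_host('harrisphwo.chat.ru'))
    print(name_servers('chat.ru'))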

Even if you could get a machine to do this kind of thing reliably and accurately, you can readily see how much more work it involves. I think this is far more work than you would want to wait for every time you press the "submit" button. Also, this work does not help SpamCop carry out its principal tasks: filtering out spam deliveries for its paid users, and maintaining the SpamCop blocking list for those who want to use it to filter their mail (or that of their customers or users).

Those outfits that have had success in dealing with spam websites generally tackle the problem by targeting DNS providers and domain registrars rather than individual web server IP addresses; it takes a while to collect the kind of evidence you need in order to make a case, and errant providers generally have to be LARTed several times before they respond (if in fact they do respond, and if they don't simply lie about having done so).

-- rick


... this is the kind of vastly more complicated protocol that you generally have to follow: ...
Thanks for crystallizing that out, Rick - worthy of preservation for future reference/referral, I think. While all of that (and much more) is covered in your 'spamweb' pages, I don't think there's anything quite so compact/single-page there? For these pages an outline is a useful thing to be able to pull up (though I can't quickly check against what's in the Wiki at the moment).

...Personally, I'm coming to the conclusion that we'd all be better served if the SpamCop parser would stop even trying to resolve and report spamvertized web sites, however much a benefit it can be when it does work.
spamcop did announce it was going to stop, and got a lot of feedback from those who do use spamvertised websites to filter for spam. One admin told me in the ngs that about 25% of his spam was caught by using a list of spamvertised websites. I didn't follow the discussion carefully since I am not a server admin, but I think that spamcop 'feeds' the blocklist (is it SURBL?) that lists spamvertised websites. They have other sources but, of course, spamcop is a great source, even with its brokenness.

And newbies learn a lot about how spammers work by asking the question! The dynamics of spam and spam fighting cover a lot of ground - from the technicalities of how the internet works, to the social considerations of censorship, greed, freedom and responsibility, to even how different occupations attract certain personality types and the difficulty of right-brainers communicating with left-brainers.

Actually, the problem here is communication. The OP and another poster maintain that an error message should be sent - and, apparently, the spamcop developers think that users should understand that if the parser doesn't do whatever, it can't do whatever (as rconner said, to the knowledgeable, the process of looking up spamvertised URLs is extremely complicated). From a communicator's standpoint, you would think that they would state right up front in the instructions that the parser's lookup of spamvertised sites is limited and not recommended, but is there for those who want to use it as part of a tool set.

It wouldn't actually hurt to throw in some history. When spamcop began, reports to the abuse address of the web host were educational both to the web host and to the person sending spam - who were often legitimate businesses (both online and offline). It alerted a lot of people to the concept that spam is NOT like snail junk mail: legitimate offers alone could entirely choke the email system, so unsolicited bulk email was simply not viable. Within a very short time, in spite of angry protests from some individuals, legitimate bulk emailing started using confirmed subscription lists (or at least, they tell you somewhere on their website who you will be getting email from if you buy something). Now, reporting of spamvertised websites almost never involves a legitimate person selling a legitimate product.

Miss Betsy


