problems sending to prodigy.net hosted email

We are having issues with domains hosted on prodigy.net email servers
including att.net, bellsouth.net, and sbcglobal.net.

We are being rejected for bad reverse DNS, but our DNS is set up correctly.
The error we are receiving is:
Remote host said: 550 5.7.1 Connections not accepted from servers
without a valid sender domain.flph829 Fix reverse DNS for 74.252.14.252

I leave it up to the reader to test the validity of 74.252.14.252, but
every test we've done looks good.

The MX records for these domains indicate this (identical on the three
domains mentioned above):
;; ANSWER SECTION:
att.net. 175 IN MX 5 al-ip4-mx-vip1.prodigy.net.
att.net. 175 IN MX 5 ff-ip4-mx-vip2.prodigy.net.
att.net. 175 IN MX 5 ff-ip4-mx-vip1.prodigy.net.
att.net. 175 IN MX 5 al-ip4-mx-vip2.prodigy.net.

Everything we can find on the postmaster pages, forums, etc. point to
emailing abuse_rbl@abuse-att.net. We have done this and received their
autoresponder. We've waited the requisite 48 hours and emailed again
for an escalation only to receive another autoresponder with another
ticket number attached (even though we emailed with a ticket number in
the message). This has now been ongoing since at least March 4, when
we received our first complaint and we have yet to hear anything from
AT&T. We don't currently have any direct contacts for Prodigy.net.

I will say that the 74.252.14.0/24 netblock is owned by AT&T and I'm
wondering if that might be causing the issue. For instance, they may be
doing a local lookup of the PTR record instead of querying the
delegated servers.

I'm hoping someone here has a point of contact which I might reach out
to in order to correct this issue. Any help would be appreciated.

Trey Nolen

(Caveat: it's been a long, LONG time since I have sent ANY mail to prodigy.net, att.net, bellsouth.net, or sbcglobal.net. So I don't have any advice specific to sending mail to servers hosting domains on those services. Rather, I've concentrated on what I understand to be Best Practice for mail admins.)

[satch@c74-admin ~]$ dig +short -x 74.252.14.252
mail.internetpro.net.
[satch@c74-admin ~]$ dig +short mail.internetpro.net.
74.252.14.252

OK, forward and reverse match.
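For anyone who wants to repeat the check, the comparison logic is easy to script. A minimal sketch with the thread's values hard-coded, since the live lookups (`dig +short -x $ip` and `dig +short $name`) need network access; `fcrdns_check` is just a name I made up:

```shell
#!/bin/sh
# Forward-confirmed reverse DNS (FCrDNS) check, offline sketch.
# On a live system the second and third arguments would come from:
#   name=$(dig +short -x "$ip")
#   fwd=$(dig +short "$name")
fcrdns_check() {
  ip="$1"; name="$2"; fwd="$3"
  if [ "$fwd" = "$ip" ]; then
    echo "OK: $ip <-> $name"
  else
    echo "MISMATCH: $ip -> $name -> $fwd"
  fi
}

# Values observed in the thread:
fcrdns_check 74.252.14.252 mail.internetpro.net. 74.252.14.252
```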

One interesting point: When I did the full reverse lookup of the IP address (without the +short), this was part of the ADDITIONAL SECTION:

ns2.internetpro.net. 5999 IN A 74.252.14.252

I would suspect that some people would look at you oddly when your mail server is also one of your authoritative name servers. I know it's stupid, but mail admins have for years been trying to figure out behavioral habits and stigmata of spammers. Are you short of IP addresses, or stingy with servers?

(I know in my consulting practice I strongly discourage having ANY other significant services on DNS servers. RADIUS and DHCP, ok, but not mail or web. For CPanel and PLESK web boxes, have the NS records point to a pair of DNS-dedicated servers, and sync the zone files with the ones on the Web boxes.)

That said, I think I see a potential set of problems. The TTL on your PTR record is too short. Best Practices call for the TTL to be at least 86400, if not longer. Snowshoe spammers tend to have short TTLs on DNS records so that it's easy to shift to cleaner IP addresses when the current IP address' reputation is sullied by ne'er-do-well customers or hijackers.

6000 is only 1.5+ hours. In all the formulas I've seen published, the accepted TTL was NEVER less than 14400, or four hours.

And your name server record TTLs should be MUCH longer, like 864000 (10 days). Too short a TTL for NS (and SOA) records can be a big red flag for some mail operators.
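As a sanity check against those thresholds, a trivial sketch (the 14400s floor and 86400s recommendation are the figures discussed above; `ttl_verdict` is a made-up helper name):

```shell
#!/bin/sh
# Classify a DNS TTL against the thresholds discussed above:
# < 14400s (4h)  -> too short, a snowshoe-spammer red flag
# < 86400s (1d)  -> acceptable but below Best Practice
# >= 86400s      -> good
ttl_verdict() {
  ttl="$1"
  if [ "$ttl" -lt 14400 ]; then
    echo "too short: ${ttl}s (below the 14400s floor)"
  elif [ "$ttl" -lt 86400 ]; then
    echo "acceptable: ${ttl}s"
  else
    echo "good: ${ttl}s"
  fi
}

ttl_verdict 5999    # the PTR TTL seen in the lookup above
```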

What does SPF say for the domains in question? Also, what is the TTL on the TXT records?
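The record itself comes from `dig +short txt <domain>`. As an illustration of what to look for once you have it, here's a toy matcher that only handles literal `ip4:` mechanisms (no CIDR ranges, `include:`, `a`, or `mx` handling, so it's nowhere near a real SPF evaluator; the record string below is an example, not the domain's actual SPF):

```shell
#!/bin/sh
# Toy check: does an SPF string contain a literal ip4:<addr> mechanism
# for a given IP? A real evaluator must also handle CIDR prefixes,
# include:, a, mx, redirect=, etc. (RFC 7208).
spf_lists_ip() {
  spf="$1"; ip="$2"
  case " $spf " in
    *" ip4:$ip "*) echo yes ;;
    *)             echo no ;;
  esac
}

spf_lists_ip "v=spf1 ip4:74.252.14.252 mx -all" 74.252.14.252
```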

Why not? I've never had a problem with multiple services on Linux, in
contrast to Windows, where every service requires its own box (or at
least its own VM).

Go for it! Failure is an awesome teacher :-)

I don't really see a problem, especially since you normally have
two DNS servers...

1. Spreading the attack surface across multiple systems. Someone nailing your web server doesn't affect your DNS server, or mail, or authentication, or logging, or...

2. Robustness during maintenance. Browsers cache DNS responses, including NXDOMAIN responses. Just because your web server is inaccessible while you do something with it doesn't mean the browser is completely disconnected when you bring it back up. Ditto for mail servers.

3. Application-specific attack mitigation on each type of server. It's far easier to lock down a DNS server if you don't have any other significant services running. Ditto for mail, ditto for Web, ditto for syslog servers, and so forth.

4. Limit what an attacker can do if s/he "breaks through" your protections. I even go so far as to impose severe limits on the internal, nominally "trusted" network, to minimize cross-server attacks through the local network. In short, systems should *not* blindly trust neighboring systems.

I don't like publicly facing VMs. They are fine for internal functions that are *not* exposed to the world. (You can't help it with cloud-hosted objects, but just remember that cloud servers can be compromised just as easily as iron-hosted servers, perhaps even more easily.)

About cloud: I prefer that any cloud-hosted servers not have mission-critical functions. The best use of cloud servers, in my opinion, is to host user interface tasks, and tunnel through to servers in my machine room for data. Depends on the application, but my critical data is in my machines.

I still recall when I was working on ARPAnet and the Morris worm first launched. I also recall when the IMPs were flipping bits so that packets would have addresses changed and run forever until the entire ARPAnet was restarted. That's when TTL was introduced into the NCP implementation.

Two DNS servers hosted on one box (or VM), even with two addresses, are easily knocked out by a DDoS amplification attack. That's the norm for a number of "web control panel" systems like Plesk and CPanel.

It depends on the scale of your operations. Last time I was in that situation, I had roughly 25,000 domains spread across 30 servers. Life became MUCH simpler when I put up dedicated, and high-power, physical systems running non-recursive BIND for DNS1 and DNS2, as well as another pair of boxes running recursive servers as DNS3 and DNS4.

Getting QMail and Exim to "smart host" to my monster MX servers proved to be pretty easy, and I was even able to get the web servers to tell me when a mailbox was full so I could reject the SMTP exchange at the edge, instead of generating backscatter.
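For anyone wanting to reproduce the smart-host side of that, a minimal Exim 4 router along these lines would do it (the hostnames are placeholders, not the original servers; QMail needs the equivalent via its smtproutes control file):

```
# Exim 4 router: send everything non-local to the central MX pool
# instead of delivering directly; hosts are tried in the order listed.
smarthost:
  driver = manualroute
  domains = ! +local_domains
  transport = remote_smtp
  route_list = * mx1.example.net:mx2.example.net
```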

And, with a pool of roughly 4,000 IP addresses, I got rid of ARP storms in our network by putting up a little server called "ackbar", which was configured to respond to all otherwise unused IP addresses in our pool. (Edge routers were Cisco 7000 class, with DS3 uplinks.)

Lessons learned well.

If this isn't pertinent to the list, feel free to answer privately. How did you implement the server that got rid of ARP storms?

Charles Bronson

Perhaps something like an ARP sponge?
https://ams-ix.net/technical/specifications-descriptions/controlling-arp-traffic-on-ams-ix-platform

Linux systems have the ability, given enough RAM, to associate almost any number of IP addresses with a given interface. Our IP allocation database kept track of who was using what IP address. I wrote some queries to collect all unassigned IP addresses and to construct the appropriate shell commands to assign those IP addresses to Ackbar's interface. Part of the program would also remove any allocated IP addresses from the server automatically.

Worked like a charm.

Whenever someone would nmap our address space, there would be at most one ARP request for the address; the router would then remember the IP->MAC association for the subsequent scans for a period of time -- 30 minutes if we were renumbering, 12 hours otherwise.

The Ackbar server lived attached to our main distribution switch, so that subsequent traffic to those unused IP addresses stayed out of the server farm. We had some, er, "interesting" denial of service attacks that didn't do as much damage as they could have.
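The generation step described above can be sketched in a few lines of shell. The interface name, the example addresses, and the `unused_cmds` helper are illustrative; the original drove this from an IP-allocation database rather than literal lists:

```shell
#!/bin/sh
# Given the full address pool and the currently allocated addresses
# (space-separated lists here for illustration), emit the "ip addr add"
# commands that would bind every unused address to the sponge server's
# interface.
unused_cmds() {
  pool="$1"; allocated="$2"
  for ip in $pool; do
    case " $allocated " in
      *" $ip "*) ;;                               # allocated: skip it
      *) echo "ip addr add $ip/32 dev eth0" ;;    # unused: claim it
    esac
  done
}

unused_cmds "203.0.113.1 203.0.113.2 203.0.113.3" "203.0.113.2"
```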

LaBrea Tarpit http://labrea.sourceforge.net/ can do this as well, though perhaps only for IPv4. Basically it looks for unanswered ARP requests and answers them. What it does with the ensuing session data is configurable.