Someone pointing their domains to my box's IP address

A couple of weeks ago I noticed a flood of activity in the nginx logs for my box’s domain. The logs were filled with thousands of requests for PDF files, all referred from a single domain. After a bit of research, I discovered this other domain points to my server’s IP. Since the domain isn’t one my box serves, nginx directs the requests to my MIAB default domain.

I contacted the domain’s registrar to report the issue, and added a fail2ban filter in the meantime to clean up my logs. I received a standard form-letter response, but no action to date. A few days ago a second referring domain started to appear in my logs; it is now pointing to my server IP as well.

I’m going to look into a new IP for my server, but there’s no guarantee a new IP won’t have the same issues. I had been running this box for almost a year with no problems, up until a couple of weeks ago. What I’d really like to do is implement proactive measures to preserve resources. Are there any changes to the nginx configuration and/or virtual host files that I can safely make? Or any recommendations on how to better protect my box in this situation?

BOX INFO

MIAB v0.53a
Linode VPS, Ubuntu 18.04.5
DNS is managed by MIAB

No modifications have been made to the box, other than adding the following:
webmin
fail2ban filter nginx-badreferrals to filter referrals from these domains (a rough sketch of this setup appears after the list)
fail2ban filter nginx-4xx to filter 404s
fail2ban jail to enable the filters
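For reference, a sketch of what a setup like this might look like; the filter name matches the list above, but the regex, domain, and jail settings here are hypothetical stand-ins:

# /etc/fail2ban/filter.d/nginx-badreferrals.conf
[Definition]
# match access-log lines whose referrer field is the offending domain
failregex = ^<HOST> -.*"(GET|POST|HEAD).*HTTP.*" \d+ \d+ "https?://(www\.)?theirdomain\.com
ignoreregex =

# jail entry, e.g. in /etc/fail2ban/jail.d/nginx-badreferrals.conf
[nginx-badreferrals]
enabled  = true
port     = http,https
filter   = nginx-badreferrals
logpath  = /var/log/nginx/access.log
maxretry = 1
bantime  = 86400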

You could make a location block for the file and deny all; inside it. I’m not really sure, but I suspect this consumes fewer resources than fail2ban.

You probably shouldn’t do these:

Make a 301 redirect to the URL of the mail server of whoever refuses to correct the situation.

Serve a PDF that contains an advertisement of the competitor.


@openletter would adding a location block persist through a MIAB update?

Depends on how it’s done. I think the best way might be to create a *.conf file in /etc/nginx/conf.d/ and, instead of using location, just use server { listen 80; listen 443 ssl; server_name example.com; deny all; }. That probably isn’t quite right, so you’ll need to play with it.
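Written out, and hedged as a starting point rather than a tested config, that sketch would look roughly like this. Note that deny all; answers every request with a 403, and that an ssl listener normally needs certificate paths (or an http-level ssl_certificate to inherit), which may be the part that needs playing with:

server {
	listen 80;
	listen 443 ssl;

	server_name example.com; # the domain that is not yours
	deny all; # every request gets a 403
}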

The other option is something like location ~ (filename1|filename2|filename3)\.pdf { deny all; }, making sure these don’t match any files you actually want nginx to serve. Note that ~ is case-sensitive; for case-insensitive matching use ~*.

I really like the conf file option. I guess I just need to give it a try and hopefully it survives the next MIAB update. @openletter Thank you for your help!

It should survive updates because the file isn’t installed by MiaB, and MiaB usually doesn’t touch files it didn’t install. It gets applied to the nginx configuration because /etc/nginx/nginx.conf has include /etc/nginx/conf.d/*.conf; in the http block.
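For context, the relevant excerpt of the stock Ubuntu /etc/nginx/nginx.conf looks roughly like this:

http {
	...
	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}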

Update: adding a conf file to the /etc/nginx/conf.d/ directory results in a security alert, and the default domain is blocked. I moved the conf file to /etc/nginx/sites-available and restarted nginx. This works for the moment.

What generated the security alert?

Did you create a symbolic link from /etc/nginx/sites-enabled/?

Placing the conf file in the /etc/nginx/conf.d/ directory created the security alert, but it works perfectly fine in the /etc/nginx/sites-available/ directory. (Yes, I symlinked it.)
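For anyone following along, the symlink and reload look roughly like this (rejected-sites.conf standing in for whatever the file is actually named):

$ sudo ln -s /etc/nginx/sites-available/rejected-sites.conf /etc/nginx/sites-enabled/
$ sudo nginx -t && sudo systemctl reload nginx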

I don’t have any SSL certificate paths in the server block. I’m wondering if this could be the issue in the conf.d directory? I’m new to nginx, having worked only with Apache in the past. I’m learning as I go, so I could very well have an error somewhere.

Sounds like you’re taking the appropriate steps. The WHOIS information for the domain should get you in contact with the actual owner (the email is masked with a private one, but that should forward to them). Adding the vhost is probably the best option if you can’t get them to stop.

There are different ways to do what you need. You can do something like this.


Method 1: For those that don’t want/need a more customized method and just want a quick solution.

Make file /etc/nginx/conf.d/rejected-sites.conf:

# Custom config, not part of MiaB

server {
	listen 80;
	listen 443 ssl;
	listen [::]:80;
	listen [::]:443 ssl;
    
	server_name theirdomain.com www.theirdomain.com; # use the domains that are not yours
	return 444; # just close the connection; no point in generating certs for non-owned domains

	# turn off access and error logging for the domains
	access_log off;
	error_log /dev/null; # no "off" functionality for error logging. Send to null
}

# To add more domains, simply put them in the server_name directive like the example. For instance:
# server_name theirdomain.com www.theirdomain.com anotherdomain.com www.anotherdomain.com;
# Alternatively, copy and paste the server block into this same file and change the domains

Change theirdomain.com and www.theirdomain.com to the right domains.

Now test Nginx config and reload if good.

~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

$ sudo systemctl reload nginx
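To confirm the block is matching (theirdomain.com standing in for the offending domain), a plain-HTTP request with a forced Host header should get the connection closed with no response at all:

$ curl -i http://127.0.0.1/somefile.pdf -H "Host: theirdomain.com"
curl: (52) Empty reply from server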


Method 2: For those that want to tailor things a little more and let real visitors (like the domain owner) know why they’re hitting an error page.

# Custom config, not part of MiaB

server {
	listen 80;
	listen [::]:80;

	server_name theirdomain.com www.theirdomain.com; # use the domains that are not yours

	root /home/user-data/www/rejected; # place 404.html file here. This can be the root for any rejected domains pointing to your IP
	error_page 404 /404.html; # custom error page; helps real people understand you won't be serving content here and why

	# force a 404 for the site root instead of a 403 Forbidden; we want the custom 404 page
	location = / {
		try_files /dontchange.html =404;
	}

	location /404.html {
		internal;
	}

	# allow robots.txt to be accessed without physically making the file
	location = /robots.txt {
		add_header Content-Type text/plain;
		return 200 "User-agent: *\nDisallow: /\n";
	}

	# turn off access and error logging for theirdomain.com
	access_log off;
	error_log /dev/null; # no "off" functionality for error logging. Send to null
}

server { # when accessed through https
	listen 443 ssl;
	listen [::]:443 ssl;

	server_name theirdomain.com www.theirdomain.com; # use the domains that are not yours
	return 444; # just close the connection; no point in generating certs for non-owned domains
}

# To add more domains, simply put them in the server_name like the examples. For instance:
# server_name theirdomain.com www.theirdomain.com anotherdomain.com www.anotherdomain.com;
# Alternatively, copy and paste the server blocks into this same file and change the domains

Make a file rejected-sites.conf and place it into /etc/nginx/conf.d/ so you don’t have to symlink from sites-available to sites-enabled.

Use a custom 404.html page if you want to tell people why a 404 is happening:

<!doctype html>
<html lang="en">
	<head>
		<meta charset="utf-8">
		<title>404</title>
		<style>
		  body { text-align: center; padding: 150px; }
		  h1 { font-size: 25px; }
		  body { font: 16px Helvetica, sans-serif; color: #333; }
		  article { display: block; text-align: left; width: 650px; margin: 0 auto; }
		  a { color: #dc8100; text-decoration: none; }
		  a:hover { color: #333; text-decoration: none; }
		</style>
	</head>

<body>
	<article>
		<h1>404 Page Not Found</h1>
		<div>
			<p>We don't own this site. If you're the owner, remember to change your DNS to your own IP address or to that of your hosting provider.</p>
		</div>
	</article>
</body>
</html>

Name this file 404.html and place it into a new rejected directory in the www folder (/home/user-data/www/rejected).

For bots that ignore robots.txt, put something like this into the above Nginx server block:

	if ($http_user_agent ~* ^Baiduspider) {
	  return 403;
	}

(Just an example; I believe Baidu normally respects robots.txt.) You might want to leave logging on at first to see which bots are not complying, add those to the Nginx blocks, and then turn logging back off later. You can use separate files to keep their logs apart from your own domains’, like:

	access_log /var/log/nginx/theirdomain.com.access.log;
	error_log /var/log/nginx/theirdomain.com.error.log;
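If the list of misbehaving bots grows, a map scales better than stacked if blocks, since Nginx evaluates it once per request. A rough sketch with hypothetical bot names; the map goes at the top level of the same conf.d file (http context), outside the server blocks:

# hypothetical bot names; extend as offenders show up in the logs
map $http_user_agent $blocked_bot {
	default          0;
	~*Baiduspider    1;
	~*SomeScraperBot 1;
}

and then inside the server block:

	if ($blocked_bot) {
		return 403;
	}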

Now test Nginx config and reload if good.

~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

$ sudo systemctl reload nginx
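To spot-check Method 2 (again with theirdomain.com as a stand-in), the synthetic robots.txt and the custom 404 should both come back:

$ curl http://127.0.0.1/robots.txt -H "Host: theirdomain.com"
User-agent: *
Disallow: /

$ curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1/some.pdf -H "Host: theirdomain.com"
404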


There are other ways of doing this too, like simply not letting a non-existent vhost fall through to your default host (with Nginx, requests for unknown hostnames go to the default server, which is the first vhost defined unless one is marked default_server). For instance, you can add a catch-all server with server_name _; that refuses everything, which means anything pointing at your IP won’t reach your default domain. With certain servers I prefer access by direct IP, so I didn’t include this option above. This method, however, would mean not having to manually add each domain to your conf file. So it’s the quickest/easiest, but it’s pretty rare that people point their purchased domains at someone else’s IP address.
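A rough sketch of such a catch-all, hedged because MiaB may already designate its own default server on these ports; if nginx -t complains about a duplicate default server, drop the default_server parameters and make sure this file is included first:

# catch-all for hostnames not served by any other vhost
server {
	listen 80 default_server;
	listen [::]:80 default_server;
	server_name _;
	return 444; # close the connection without responding
}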

For your particular situation, all requests to any of theirdomain’s URLs or files (the PDFs) will simply hit a 404 page (or have the connection closed by the 444). Nothing will be logged in your box’s logs.


Also don’t forget to check the SOA record. I find that its email address is sometimes better monitored than the WHOIS record.
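For example (hypothetical output), the second field of the SOA record is the responsible-party mailbox, with the first dot standing in for the @, so the contact below would be hostmaster@theirdomain.com:

$ dig +short SOA theirdomain.com
ns1.registrar-dns.example. hostmaster.theirdomain.com. 2021031501 7200 3600 1209600 86400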

I’ve also had success one time sending email to companyname@gmail.com, with a response from the founder, who had created the email address long ago.

Implemented and tested Method 1. It’s working perfectly, and doesn’t appear to interfere with MIAB operation.

WHOIS contains only the registrar’s regional contact email (outside of North America) for both domains, instead of the standard privacy guard or company contact. Unfortunately, my only option was the registrar.

Just checked the SOA and there is an email!
