I use Miab’s dyndns functionality to point a subdomain to my home server, which runs Nextcloud. The IP address of the home server changes at least once per day and is updated every ten minutes. This has worked for some years (I just realized how long Miab has existed!). Now, however, I can no longer connect to my home server, although the IP on the admin page is correct. I pinged the subdomain and the ping goes to a different IP. I debugged the issue a bit and created some other subdomains, all pointing to the home server; all but my old subdomain work as expected. The IP addresses for these subdomains on the admin page are all identical, but the IP addresses shown by ping differ. It seems that the old subdomain still points to an old IP.
All status checks are OK. Any ideas what might have gone wrong?
P.S. I am running v0.52.
I’m assuming that you use the curl DNS API to update these. Do the requests to that API fail? Did you enable 2FA?
Yes, I use the curl api. The requests do not fail. I did not enable 2FA.
I have never used this API myself, so I have no idea what could have gone wrong. Try running
sudo journalctl -xe | grep nsd
on your box. What do you see?
sudo journalctl -xe | grep nsd gives no output, although the problem persists: the IPs on the admin page are all identical, but the IPs I get from ping differ.
Just in case it matters, this is the command I run on my home server:
curl -X PUT --user email@example.com:password https://box.mydomain.com/admin/dns/custom/cloud.mydomain.com
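For context, a minimal sketch of the kind of cron-driven updater behind that command might look like the following. The hostnames and credentials are placeholders, and the helper function is purely illustrative, not part of Miab:

```shell
#!/bin/sh
# Sketch of a dyndns updater for Miab's custom DNS API.
# box.mydomain.com, cloud.mydomain.com, and the credentials are placeholders.

build_update_url() {
  # $1 = box hostname, $2 = fully qualified record to update
  printf 'https://%s/admin/dns/custom/%s' "$1" "$2"
}

update_record() {
  # A PUT with no body sets the record to the source IP of the request;
  # the record type defaults to A when it is omitted from the URL.
  curl -s -X PUT --user 'email@example.com:password' \
    "$(build_update_url 'box.mydomain.com' 'cloud.mydomain.com')"
}

# Run e.g. every ten minutes from cron:
# */10 * * * * /usr/local/bin/update-dyndns.sh
```

The API replies with the plain text “OK” when the update succeeds, which makes it easy to log failures from the script.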
The only difference between the subdomains that work and the one that does not is the last part of the URL.
What is the reply from your curl when you run it manually instead of via the script?
I have custom scripts running as well, but the URL I use is e.g.
curl -X PUT --user firstname.lastname@example.org:password https://box.mydomain.com/admin/dns/custom/cloud.mydomain.com/Type
where Type is the type of record, e.g. A, CNAME, AAAA.
The output of the curl is ‘OK’ for all subdomains. The type is ‘A’, but I did not set it explicitly since it is optional. The documentation says: “Defaults to A if omitted.”
I have now created a cname entry for the problematic subdomain that points to a working subdomain. With this setup, the home server can be reached. Although this cannot be considered a fix, it is at least a bypass. Nevertheless, I’d like to understand the cause of the problem.
If I understand the code in dns_update.py correctly, the custom DNS entries are stored in /home/user-data/dns/custom.yaml. After I create a new entry, it shows up in the file, and when I delete it via the web interface, the entry is gone. However, even after a restart, the subdomain is still resolved to the IP that I originally set. So the question is: where are the custom DNS entries stored in addition to the custom.yaml file?
It’s inside the /etc/nsd folder. You shouldn’t edit it directly, however.
Unfortunately, my workaround worked for exactly one day, until the IP changed again. Now the previously working subdomains also resolve to the wrong IP (yesterday’s IP), although the admin page shows the correct IP.
@daveteu mentioned the /etc/nsd folder. The subdomain entries there correspond to the desired IP, but ping still shows the wrong IP. Also, if I delete a DNS entry via the admin page, the entries in custom.yaml and in the /etc/nsd folder vanish immediately, but the subdomain is still resolved. Do you have another idea what might be going wrong?
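One way to narrow down where a stale answer comes from is to compare what the box’s authoritative nsd serves with what a recursive resolver returns, and to look at the TTL field that dig reports. The hostnames below are placeholders, and the small TTL helper is only for illustration:

```shell
# dig +noall +answer prints lines of the form:
#   cloud.mydomain.com. 86400 IN A 203.0.113.7
# where the second field is the remaining TTL in seconds.

ttl_of() {
  # Extract the TTL field from the first dig answer line on stdin.
  awk '{print $2; exit}'
}

# Authoritative answer straight from the box (no caching involved):
# dig @box.mydomain.com +noall +answer cloud.mydomain.com A

# Answer from a public recursive resolver (may be cached; the reported
# TTL counts down toward zero while the cache entry is alive):
# dig @1.1.1.1 +noall +answer cloud.mydomain.com A | ttl_of
```

If the authoritative answer is already correct while the resolver’s answer is stale, the record itself is fine and the difference is down to caching between the box and the client.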
You mentioned using CNAMEs to work around the issue. Do you by any chance have multiple records in a subdomain?
OK, it seems that this problem lies, as usual, on the other end of the line (not sure if the computer and/or the person sitting in front of it is the problem). My computer (and the home server I used for debugging) seem to cache the result they get from Miab’s nameservers. I have now used a third PC, and pings to the same subdomain return different results on different PCs. Sorry it took so long for me to figure this out!
Is this something that can be controlled by the nameserver (Miab), or is it entirely up to the client whether to re-request or cache the IPs? What I don’t get is why this exact setup worked so well for so long and then stopped working abruptly, and for different clients at that. Since I just upgraded to v0.52, it is tempting to assume that something changed there. Could that be the case?
That explains a lot: the default TTL for records is now a full day (that is, records may be cached for 24 hours), so when you update a record, it can take up to 24 hours for the change to propagate to all resolvers. And by the time those 24 hours have passed, it won’t be long until your IP changes again, so you end up in a cat-and-mouse game.
That explains it. Thanks! nordurljosahvida suggested a checkbox somewhere to “use low TTL for NS records [advanced - only use this if you know what it means and really need it]”, or a script under the management directory that simply lowers the TTL to 5 minutes, to give migrating users a way to prepare for a migration in advance. I guess neither suggestion was actually implemented, right?
So, is there anything I can do about it? That was one of my favorite features. And I can hardly believe that I am the only one with this problem.
I have now (hopefully) temporarily solved the problem by changing the TTL settings in management/dns_update.py from 86400 back to 1800. I don’t know if that is sufficient without a reboot, since I rebooted right away (old Windows reflex). Now I’m back to the old behavior and everything works again. Since the change won’t persist across an update, I am thinking about creating a pull request in order to put the TTL into the
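A rough sketch of that kind of change, assuming the default TTL appears as the literal 86400 in dns_update.py. The path, the example line, and the exact occurrence are assumptions; check the file first, since other timers could use the same number:

```shell
# Find where the literal occurs before touching anything:
# grep -n '86400' /path/to/mailinabox/management/dns_update.py

# Illustrative only: what the substitution does to a hypothetical line.
line='DEFAULT_TTL = 86400'
lowered=$(printf '%s\n' "$line" | sed 's/86400/1800/')
echo "$lowered"

# After editing the real file, regenerate the zone files so the new TTL
# takes effect without a reboot (tool name from the Mailinabox repo):
# /path/to/mailinabox/tools/dns_update
```

Note that a blanket sed over the whole file is risky for exactly the reason flagged above; editing the one relevant line by hand is safer.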