PCH.net down?

This says it's not just down for me. http://downforeveryoneorjustme.com/pch.net

Anyone else?

Loads from here (outside of Toronto, ON) - peered with them.

Seemed slow to load, though...


From everywhere I've tried, it connects but loads slowly.

Also ok from France.

The looking glass is also working fine (https://prefix.pch.net/applications/lg/); do you need to look through a specific glass?

It seems to be fine for me as well, in Australia. Loading time was a bit slow, but I do have a few downloads running.

I received the same message from http://downforeveryoneorjustme.com/pch.net
but if I go to the site directly from Miami it pulls up, though it's slow to do so.

Everyone should take careful note... downforeveryoneorjustme.com lives
on App Engine at Google, so 'downforeveryoneorjustme' really just
tests whether Google's network has a path to the site.

Very interesting - thanks for sharing that tip....


We don't know of an outage this morning, but one of our web servers has been a bit slow because someone's been multi-thread downloading our routing-table archive from it for the last week or so. That hasn't caused performance issues in the past, but now we're looking at putting the http-visible side of big things that people download on a server of their own that isn't also serving http for other tools and web-site parts.


I got there with no problem


That's quite a revelation. I assumed it tested from all points of
the internet other than mine :^)

I suspect it simply does the equivalent of a 'wget' of the hostname you
enter... the App Engine API doesn't really permit much more than http/s
out, I think :( Of course it COULD hand that wget off to some external
service that queries from 100k other points of light, but really... I
bet it's just doing it to the hostname you enter.
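For the curious, the single-vantage check being speculated about above can be sketched in a few lines. This is purely illustrative: the function name `looks_up` and its behavior are assumptions about what a wget-equivalent checker might do, not the actual downforeveryoneorjustme implementation.

```python
# Hypothetical sketch of a single-vantage "is it down?" check: one HTTP
# fetch from wherever the checker happens to run. A success only proves
# that *this* network (e.g. Google's, for an App Engine app) has a path
# to the site, not that the site is reachable for everyone.
import urllib.error
import urllib.request


def looks_up(url, timeout=5):
    """Return True if a plain HTTP GET from this vantage point succeeds."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # urlopen follows redirects, so any 2xx/3xx final status counts.
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # DNS failure, refused connection, timeout, etc. all look "down"
        # from here, even if the site is fine from other networks.
        return False
```

The point of the thread stands: a `False` from one vantage point tells you only about the path between that checker and the site.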


Dead again?

Web page says "Could not connect: ".

Northern Colorado, US.

Up for me in WI.

The multi-threaded download that's been hammering our web servers is still going on. We've just turned up a new server this morning, and expect to have the bulk-download processes moved over to it later today, offloading the stuff the rest of you are trying to use. My apologies. We're getting a lot of inquiries about it, so I'll try to post back to NANOG as soon as we think everything has been separated out and is working at full speed again.