May 12 ~ 13
Possible momentary outages with HTTP 500 errors.
Hosting provider's fault - "planned" maintenance for "only some" of the servers,
but somehow all 5 of our servers were affected.
HTTPS has been added.
You can still use HTTP if you want (e.g. if your app can't make https requests).
Apr 1, 3:15:10 a.m. UTC
5 billion requests served - total give:take ratio at 1362k:1.
10 years of service - total give:take ratio at 1330k:1.
Thank you for your trust! Tell all your friends :)
Consider donating $1 for each year ip2c was useful to you ;)
Nov 28, 5:42:54 p.m. UTC
4 billion requests served - total give:take ratio at 1128k:1.
Jul 7, 11:28:45 a.m. UTC
3 billion requests served - total give:take ratio at 881k:1.
May 1 ~ 16
Transparently transitioning to better servers, should yield +50% load capacity.
9 years of service - total give:take ratio at 690k:1.
Also Pi Day
Jan 20, 3:16:22 a.m. UTC
2 billion requests served - total give:take ratio at 618k:1.
Dec 16, 6:22 p.m. ~ 2:08 a.m. UTC
Filesystem problem on 1 of our 3 servers; it may have caused malformed responses to ~10% of requests.
Sep 12, 9:45 a.m. ~ 3:35 p.m. UTC outage
Hosting provider's fault, possible momentary outages on 1 of our 3 servers. Poland...
They say the cause was a datacenter-wide DDoS from a competitor.
Sep 6, 9:25 ~ 10:05 a.m. UTC outage
Hosting provider's fault, datacenter-wide outage.
Jun 13, 4:02:50 p.m. UTC
Billionth request served - total give:take ratio at 332k:1.
Added header: Access-Control-Allow-Origin: *
Now it should work with Ajax/XHR. This feature was overlooked for too long...
May 31 outage
Today we had about 50 minutes of partial outages in total, around 2 and 4 p.m.
Our hosting provider rolled out updates for 500k of their clients without testing.
Stat functions have been rewritten and are up-to-date again.
Design has been modified to support pooled servers. Currently running on 3 Apache instances in round-robin.
This is due to the hosting environment no longer supporting nginx and php-fpm.
Some stats can be out-of-date until we rewrite them to fit the new design.
8 years of service - total give:take ratio at 277k:1.
Added a live graph to the Stats page. Bored engineers in da house.
Added Users of the Month
Fun fact: French users exhibit EMF (End of Month Fever™).
Added a try-it form to the About page - for newcomers.
Added users & countries to Stats
Server software upgraded:
ubuntu 10.04 → 14.04 | apache 2.2 → nginx 1.4.6 | mod_php → php-fpm | php 5.2 → 5.5.9
Tested at ~820 reqs per second, limited only by uplink bandwidth. Have fun overloading it.
Jul 7 ~ Jul 13 outage
A segfault in Apache occurred on Jul 7, 5:35 a.m. UTC, and the daemon somehow failed to restart it.
We didn't notice until Jul 13, 5:45 p.m. UTC, being preoccupied with other professional affairs.
We apologize for this epic fail. A watchdog has been implemented.
A record has been set! 16.7 million requests served in one day!
ip2c.org is 7 years old \^~^/ with total give:take ratio at 192k:1.
Over the last two weeks we inquired 9 times about proof of alleged server overload (2 phone calls, 7 emails).
They failed to produce even a single fact, of course.
Instead they suggested 5 times that we should buy a more expensive hosting plan.
Oct 30, 1 p.m. ~ 12 a.m. UTC outage
Flush your DNS caches and restart your apps - we had to change the server IP!
Our hosting provider home.pl blocked our entire server without warning in order to extort us.
They stated that ip2c.org generates too much load and destabilizes their infrastructure.
Note that home.pl is the biggest and most advanced hosting provider in the country,
whereas one IP lookup takes 3 I/O operations (RAM-cached), lasts about 1.0 ms, and returns 13-46 bytes plus HTTP overhead.
Too much load my ass.
They hadn't actually noticed the last 200,000,000 'destabilizing' requests for months
until we contacted their support on Oct 28 and politely asked about their cpu load limits etc.
At that moment, suddenly, a catastrophe of melting datacenters revealed itself,
and they were forced to take swift action... against a free service.
Their proposed solution was 'buy a more expensive hosting plan'.
Fuck you home.pl, you are officially relieved of this gruelling duty.
Everything is back online 12 hours later, now featuring a hardcore thunderous 2-core 1 GB Ubuntu box
- owned by us, no 3rd party involved, no conflict of interest.
13 months and 264 million requests later, our give:take ratio has gone from 37k:1 to 141k:1 (6.5 year total).
Last month alone is 1.3m:1.
Domain renamed to ip2c.org
GeoLoc.daiguo.com still works for backwards compatibility.
Added new input notations based on mod_rewrite. This upgrade is minor but long overdue.
We have seen people around the Net assume this service simply forwards requests to Software77 servers.
In reality, since day 1, this service resolves requests locally, and calls Software77 once a day for updated data.
In 5.5 years we have accessed Software77 servers about 2k times, while producing 74m+ answers on our own.
We are actually relieving their servers of load.
License changed to LGPLv3,
making our service more usable.
As IPv4 space comes close to its limit, we will deliver an IPv6 option soon.
We are still here, no major hiccups in 2.5 years. Hope it serves your needs properly.
Core is redesigned and rebuilt from scratch
Databases are no longer in use; they proved too bulky to operate.
New solution features RAM-based indexed binary search.
Update process has been reduced from over 60 minutes to under 10 seconds.
This makes GeoLoc accurate within 2 hours after new data is published by Webnet77.com,
instead of 17 hours as it was until yesterday.
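A RAM-based indexed binary search over sorted IP ranges could look roughly like this minimal Python sketch (the sample ranges and the `ip_to_int`/`lookup` names are illustrative assumptions, not the service's actual code):

```python
import bisect
import ipaddress

# Illustrative in-memory index: non-overlapping IPv4 ranges sorted by
# start address. Real data would come from the daily Webnet77 CSV;
# these three rows are made-up sample entries.
RANGES = [
    (16777216, 16777471, "AU"),      # 1.0.0.0 - 1.0.0.255
    (16777472, 16778239, "CN"),      # 1.0.1.0 - 1.0.3.255
    (3232235520, 3232301055, "ZZ"),  # 192.168.0.0/16, placeholder
]
STARTS = [start for start, _, _ in RANGES]  # parallel list for bisect

def ip_to_int(ip):
    """Dotted-quad IPv4 string -> 32-bit integer."""
    return int(ipaddress.IPv4Address(ip))

def lookup(ip):
    """Binary-search the sorted ranges: O(log n) per query, no database."""
    n = ip_to_int(ip)
    i = bisect.bisect_right(STARTS, n) - 1  # last range starting at or below n
    if i >= 0:
        start, end, country = RANGES[i]
        if n <= end:
            return country
    return None  # address falls in a gap between ranges
```

Reloading such an index from a fresh CSV is just a sorted-list load, which is consistent with the update process dropping to under 10 seconds.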
Starting at 1 a.m. UTC, something went really wrong.
Out of 96 database update threads, 34 crashed over a 31-hour period.
This indicates extreme overload generated by those other websites,
resulting in presumably over 30% incorrect answers from GeoLoc.
Nov 22 ~ Dec 20
Server shows some strain from time to time - not related to the amount of requests, however.
There are 80-90 other websites virtually hosted on the same machine as this service;
they are probably getting more and more visitors, which results in shared overload.
We apologize for the inconvenience.
Added load distribution (4 parallel databases)
If flooding takes place again, we will add automated blacklisting.
This is a public service, after all.
Flooded by some mad script from 188.8.131.52 (France)
150,000 requests/day from a single IP is a bit too many.
Anyone responsible for this - please revise your loops.
- consider caching retrieved results for at least one day.
'Cookie' your clients wherever possible, IP data changes slowly enough to be updated just once a day.
If you feel like killing our server with thousands of IPs per second,
download a csv from <http://software77.net/geo-ip> and resolve locally instead.
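The cache-for-a-day advice above could be sketched like this (a hypothetical `DailyCache` wrapper; the `resolver` callable stands in for whatever code in your app actually queries the service):

```python
import time

class DailyCache:
    """Memoize IP lookup results for one day, per the advice above."""

    TTL = 24 * 60 * 60  # one day, in seconds

    def __init__(self, resolver):
        self.resolver = resolver   # callable: ip string -> country code
        self._cache = {}           # ip -> (expiry_timestamp, country)

    def lookup(self, ip):
        now = time.time()
        hit = self._cache.get(ip)
        if hit is not None and hit[0] > now:
            return hit[1]            # fresh cached answer, no request made
        country = self.resolver(ip)  # only query the service on a miss
        self._cache[ip] = (now + self.TTL, country)
        return country
```

Wrapping your client this way caps traffic at one request per IP per day, whatever your loops do.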
GeoLoc.daiguo.com says Hello World!