Many of us run cacti. FYI.
Thanks for posting this, even though it's slightly OT.
Not to start an opinion war, but those who do run Cacti should
really consider removing this software from their boxes
permanently.
http://secunia.com/advisories/23528/
For those who don't have the time/care enough to go look
at the Secunia report, I'll summarise it:
1) cmd.php and copy_cacti_user.php both blindly pass arguments
supplied in the URL to system(). This, IMHO, is
reason enough to not run this software.
2) cmd.php and copy_cacti_user.php both blindly pass arguments
supplied in the URL to whatever SQL back-end
is used (most commonly MySQL); no escaping or sanitising
is done. Otherwise known as an "SQL injection" flaw.
There are other flaws mentioned, but they're simply subsets
of the above two. Also, register_argc_argv is enabled
(rightfully so) by default in PHP, so don't let that decrease
the severity of this atrocity. (I can forgive SQL injections,
but I cannot forgive blindly calling system().)
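For readers who haven't seen the pattern, here's a hedged sketch in Python rather than PHP (the command name, table, and columns are invented for illustration) showing both flaws next to their fixes:

```python
import sqlite3
import subprocess

def run_poller_unsafe(host):
    # the cmd.php pattern: attacker-controlled text handed to a shell,
    # so host = "10.0.0.1; rm -rf /" runs two commands -- DON'T do this
    subprocess.call("poller --host " + host, shell=True)

def run_poller_safe(host):
    # argument vector, no shell: metacharacters in host are inert
    subprocess.call(["poller", "--host", host])

def lookup_unsafe(conn, user_id):
    # the SQL injection pattern: interpolating the value into the query
    return conn.execute("SELECT name FROM users WHERE id = %s" % user_id)

def lookup_safe(conn, user_id):
    # parameterized query: the driver treats the value as data, not SQL
    return conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
```

With the unsafe lookup, a "user id" of `1 OR 1=1` returns every row in the table; with the parameterized one it matches nothing.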
I'd been considering (off and on for about a year) using Cacti
for statistics gathering, and now I'm glad I didn't. This
kind of flaw reflects directly on the programming practices
of the authors behind this software.
I've said this several times recently in less public places, but IMO, cacti is a bit of a security train wreck. The glaring problem isn't that the above-mentioned php scripts have poor security / user-supplied input sanitization. It's that those scripts were never intended to be run via the web server. So WTF are they doing in a directory served by the web server in a default cacti install?

It seems to me it would make much more sense for cacti to either split itself into 2 totally separate directories, one for things the web server needs to serve and one for everything else, or at least put all the 'web content' portions under one subdirectory of the cacti install directory, so that subdirectory can be either the DocumentRoot of a server or symlinked from elsewhere in a DocumentRoot. There's no reason for things like poller.php or any of the others that are only meant to be run by the admin from the command line to be in directories served by the web server.
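For example, if the web-facing pieces lived under a single subdirectory (the path below is hypothetical, not cacti's actual layout), the Apache config could expose only that:

```apache
# Serve only the web-facing subdirectory; cmd.php, poller.php and the
# other command-line scripts live one level up, unreachable via HTTP.
DocumentRoot /usr/local/cacti/html
<Directory /usr/local/cacti/html>
    Order allow,deny
    Allow from all
</Directory>
```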
I've heard from several people, and spent some time trying to help one of them, who had servers compromised (entry via cacti, then a local root compromise) over the past weekend due to this.
NMS software should not be exposed to the public internet. By the
time anyone who wants to attack Cacti itself can access the server
and malform an HTTP request to run this attack, they can also see
your entire topology and access your SNMP keys (assuming v1). There is
this Network Management theory called Out of Band Management. If you
are concerned about security, you should only be polling anything you
expect to be secure on a private management link/network. If you want
to run an MRTG stats collector that is publicly visible and expect it to
be secure, write it yourself or purchase it from a vendor that can
support and guarantee the security of the product.
Cacti is a free open source tool, and in my opinion such tools should never
be expected to be 100% free of bugs, errors, and exploits. If one is,
that is great. I would say you get what you pay for, but if you use
good practices around it, cacti can be a very useful and powerful tool.
That's my 2 cents,
-Scott
NMS software should not be exposed to the public internet. By the
time anyone who wants to attack Cacti itself can access the server
and malform an HTTP request to run this attack, they can also see
your entire topology and access your SNMP keys (assuming v1). There is
this Network Management theory called Out of Band Management. If you
are concerned about security, you should only be polling anything you
expect to be secure on a private management link/network. If you want
to run an MRTG stats collector that is publicly visible and expect it to
be secure, write it yourself or purchase it from a vendor that can
support and guarantee the security of the product.
Sound theory. However, as someone who has set up network management
& monitoring (both with open-source and proprietary software) dozens of
times for multiple companies (and wrote software myself when necessary),
I can tell you that it cannot work in every situation.
In particular, while it is correct to set up a separate management
network for accessing devices through SNMP, the actual management
or monitoring workstation/server usually needs to be placed somewhere
accessible from the regular network, and that is exactly how cacti
is used. The correct setup would be to require an SSL connection
(if it's a web interface) and password authentication to access
your management/monitoring server, and if it is necessary to make
data available to the outside, to do it through a separate,
controlled interface. For example, you could set up a separate page
for read-only access to certain graphs using the RRD files created
by cacti (and make sure the CGI is not run under apache's user but
under its own, and that that user is different from the one cacti
is using, so that the community strings in cacti are not exposed
if the outside interface is hacked; note that I'm speaking really
more generally - I don't use cacti and do not know if it allows
you to do this properly).
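A minimal sketch of that outside-facing interface in Python (the directory name and the policy are my assumptions, not anything cacti provides): the public CGI only ever serves pre-rendered graph images out of one export directory, so even a full compromise of it exposes no community strings.

```python
import os.path

GRAPH_DIR = "/var/www/graphs"  # hypothetical export dir, owned by a non-cacti user

def is_allowed(url_path):
    """Allow only pre-rendered .png graphs inside GRAPH_DIR, no traversal."""
    full = os.path.normpath(os.path.join(GRAPH_DIR, url_path.lstrip("/")))
    return full.startswith(GRAPH_DIR + os.sep) and full.endswith(".png")
```

The whitelist is deliberately dumb: if a request isn't a .png that normalizes to a path under the export directory, it's refused, so there's no code path to the database or the SNMP config at all.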
All that of course requires a certain amount of security knowledge
and admin skills, and sometimes even programming skills, which
a lot of the network administrators who choose to use cacti do not
have (in fact cacti seems so popular exactly because it's easy
for junior admins to set up).
BTW - personally I use nagios for both monitoring and for graphing
the resulting data (that obviously reduces the number of
SNMP queries, as I do not need to poll twice), using nagiosgrapher
with very heavy customization (I rewrote their web interface and
parts of the library and collection); the result looks like this:
http://www.elan.net/~william/nagios/printscreen_ngrapher5_nagioshost.pdf
and the bits of software I have had time to release are at http://www.elan.net/~william/nagios/
Cacti is a free open source tool, and in my opinion these should never
be expected to be 100% free of bugs, errors, and exploits.
You know, the above applies to commercial software just as much as to
non-commercial/open-source. In fact the theory is that commercial
software has more bugs & security flaws because its code is not
available and thus cannot be examined by outsiders; similarly,
for the same reasons, the bugs are found less often, and when they are,
the details about the bug may not be made available to the public
beyond some simple "software update". Just think of how many bug
and security updates are released for software coming from Redmond,
and compare to Linux, OpenBSD, FreeBSD, etc.
If it is, that is great. I would say you get what you pay for
So free software like apache is no good, right? How many security
bugs have been found in apache again, compared to IIS?
The reality is that nowadays "what you pay for" no longer works
when comparing open-source and commercial software. In fact commercial
software is very often just repackaged open-source supported by some
vendor, i.e. enterprise companies just get a name to put the blame on
if there is an issue (plus of course support, since many companies have
a bunch of junior admins and only one or two senior engineers who are
always kept very busy).
Which is rarely properly applied. I've lost count of the data centers that
block mgmt traffic from external customers but leave internal systems
(which are often "sublet" to all sorts of external customers) wide open
to mgmt servers/devices. Unfortunately mgmt systems need access to
whatever they are monitoring, so if you're monitoring customer systems
then you are more than likely exposed and should make tightening
your NMS systems a high priority. I know - I work for an NMS vendor,
and I wouldn't sign my name certifying that our stuff is secure. It's
funny how pen testing seems to avoid NMS stuff.
-Jim P.
* Berkman, Scott <Scott.Berkman@Reignmaker.net> [2007-01-18 22:34]:
Cacti is a free open source tool, and in my opinion these should never
be expected to be 100% free of bugs, errors, and exploits.
as opposed to commercial software, where you can be 100% sure
that it is full of bugs, errors, and exploits
NMS software should not be exposed to the public internet. By the
time anyone who wants to attack Cacti itself can access the server
and malform an HTTP request to run this attack, they can also see
your entire topology and access your SNMP keys (assuming v1).
I think there are a few factors at work here:
1) PHP is very easy to learn, but deals primarily with web input (i.e.
potentially hostile).
Since most novice programmers are happy just to get the software
working, they rarely ever consider the problem of making it impossible
for the software to misbehave - in other words, ensuring that it always
behaves correctly. That problem and that assurance are much, much more
difficult than just getting the software to work. You can't test
security into the software. You can't rely on a good history to
indicate there are no latent problems.
2) Furthermore, this is a service that is designed primarily for
public consumption, unlike say NFS; it cannot be easily firewalled at
the network layer if there is a problem or abuse.
3) The end devices rarely support direct VPN connections, and redundant
infrastructure just for monitoring is expensive.
4) The functionality controlled by the user is too complicated. If all
you are doing is serving images of graphs, generate them for the common
scenarios and save them to a directory where a much more simple program
can serve them.
That is, most of the dynamically-generated content doesn't need to be
generated on demand. If you're pulling data from a database, pull it
all and generate static HTML files. Then you don't even need CGI
functionality on the end-user interface. It thus scales much better
than the dynamic stuff, or SSL-encrypted sessions, because it isn't
doing any computation.
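A sketch of that approach in Python (the schema and file layout are invented for illustration): a cron job pulls everything once and writes plain files, and the public web server only ever serves static content, with no CGI in the request path.

```python
import sqlite3

def render_pages(conn, outdir):
    # pull all rows once and emit one static page per device;
    # the web server then serves these files with no computation at all
    for (device,) in conn.execute("SELECT DISTINCT device FROM stats"):
        rows = conn.execute(
            "SELECT metric, value FROM stats WHERE device = ?", (device,))
        body = "".join("<tr><td>%s</td><td>%s</td></tr>" % r for r in rows)
        with open("%s/%s.html" % (outdir, device), "w") as f:
            f.write("<html><body><table>%s</table></body></html>" % body)
```

Run it from cron every polling interval; the freshness of the pages is bounded by the cron interval, which is exactly the trade being argued for.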
As they say, there are two ways to design a secure system:
1) Make it so simple that there are obviously no vulnerabilities.
2) Make it so complex that there are no obvious vulnerabilities.
I prefer the former, however unsexy and non-interactive it may be.
write it yourself or purchase it from a vendor that can
support and guarantee the security of the product.
Unless you're a skilled programmer with a good understanding of
secure coding techniques, the first suggestion could be dangerous.
It seems that too many developers try to do things themselves without
any research into similar programs and the kinds of security risks
they faced, and end up making common mistakes in the form of
security vulnerabilities.
And no vendor of popular software I know of can guarantee that it
is secure. I have seen a few companies that employ formal methods
in their design practices and good software engineering techniques
in the coding process, but they are almost unheard of.
That is, most of the dynamically-generated content doesn't need to be
generated on demand. If you're pulling data from a database, pull it
all and generate static HTML files. Then you don't even need CGI
functionality on the end-user interface. It thus scales much better
than the dynamic stuff, or SSL-encrypted sessions, because it isn't
doing any computation.
While I certainly agree that cacti is a bit of a security nightmare, what you suggest may not scale all that well for a site doing much graphing. I'm sure the average cacti installation is recording thousands of things every 5 minutes, but virtually none of those are ever actually graphed, and those that are viewed certainly aren't viewed every 5 minutes.

Even if polling and graphing took the same amount of resources, pre-generating everything would double the load on the machine. My guess, though, is that graphing actually takes many times the resources of polling. It just makes sense to only graph stuff when necessary.
Chris
Anyone that's seen MRTG (simple, static) on a large network realizes that decoupling the graphing from the polling is necessary - the disk i/o is brutal. Cacti has a slick interface, but also doesn't scale all that well for large networks. I prefer RTG, though I haven't seen a nice interface for it yet.
Chris Owen wrote:
How large did you have to get for cacti to "not scale"? Did you try the cactid poller [which is much faster than the standard poller]?
I would say somewhere around 4000 network interfaces (6-8 stats per int) and around 1000 servers (8-10 stats per server) we started seeing problems, both with navigation in the UI and with stats not reliably updating. I did not try that poller; perhaps it's worth trying cacti again using it. I will also say this was about 2 years ago - I think the box it was running on was a dual P3-1000 with a 6-drive raid 10 (10k rpm I think).
After looking for 'the ideal' tool for many years, it still amazes me that no one has built it. Bulk gets, scalable schema and good portal/UI. RTG is better than MRTG, but the config/db/portal are still lacking.
Jon Lewis wrote:
jml@packetpimp.org (Jason LeBlanc) writes:
After looking for 'the ideal' tool for many years, it still amazes me
that no one has built it. Bulk gets, scalable schema and good portal/UI.
RTG is better than MRTG, but the config/db/portal are still lacking.
if funding were available, i know some developers we could hire to build the
ultimate scalable pluggable network F/L/OSS management/monitoring system. if
funding's not available then we're depending on some combination of hobbyists
(who've usually got rent to pay, limiting their availability for this work)
and in-house toolmakers at network owners (who've usually got other work to
do, or who would be under pressure to monetize/license/patent the results if
That Much Money was spent in ways that could otherwise directly benefit their
competitors.)
"been there, done that, got the t-shirt." is there funding available yet?
like, $5M over three years? spread out over 50 network owners that's ~$3K
a month. i don't see that happening in a consolidation cycle like this one,
but hope springs eternal. "give randy and hank the money, they'll take care
of this for us once and for all."
Paul Vixie wrote:
jml@packetpimp.org (Jason LeBlanc) writes:
After looking for 'the ideal' tool for many years, it still amazes me
that no one has built it. Bulk gets, scalable schema and good portal/UI.
RTG is better than MRTG, but the config/db/portal are still lacking.
[..]
"been there, done that, got the t-shirt." is there funding available yet?
like, $5M over three years? spread out over 50 network owners that's ~$3K
a month. i don't see that happening in a consolidation cycle like this one,
but hope springs eternal. "give randy and hank the money, they'll take care
of this for us once and for all."
Heh, for that kind of money you can even convince me to do it.
Greets,
Jeroen
(dreams about a long holiday after finishing it)
I see a reference in the response to RTG. RTG's claim to fame looks like
speed.
I've done some work with Cricket and have figured out a way to get at its
schema. I've been looking at mating Cricket's 'getter' and schema with the
Drraw and genDevConfig tools, and putting a Mason-based HTML wrapper around
the whole thing so people can pick and choose the components of the charts
they want to see, per chart and per page. And by filling in simple web
forms, it would be easy to generate command lines for genDevConfig to go
out and create the customized SNMP queries that are needed for Dial-Peers,
Cisco's Quality of Service, etc.
Would anyone be interested in such a contraption?
I see a reference in the response to RTG. RTG's claim to fame looks like
speed.
In comparison to RRDTOOL-based applications, RTG stores raw values rather
than cooked averages, allowing for a great deal more flexibility in analysis.
And you aren't limited to a temporally fixed window of data.
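The trade-off is easy to see with a toy sketch in Python (the numbers are invented): RRD-style consolidation averages raw samples into buckets, and a short spike simply vanishes from the cooked series while the raw series keeps it.

```python
def consolidate(samples, bucket):
    # RRD-style "cooking": one average per bucket of raw samples
    return [sum(samples[i:i + bucket]) / len(samples[i:i + bucket])
            for i in range(0, len(samples), bucket)]

raw = [10, 10, 10, 90]        # a one-sample traffic spike
cooked = consolidate(raw, 4)  # the 90 spike is averaged away
```

With the raw values still on disk you can re-cook them any way you like later; once only the average survives, the spike is gone for good.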
Maybe this is overly naïve, but what about the ability to auto-magically import and search various vendor SNMP/WMI MIBs? I can think of 3 open source NMSes that do a good job if you set up all 3 to monitor the network, but they all overlap and none of them alone really does the whole job.
I also am using a closed-source NMS at work that does little more than minimal on-system agent monitoring of Windows/Linux based servers (disk space, cpu, memory utilization). It has good graphing, good alerts, good SNMP integration, granularity, and escalation, as well as pretty executive reports to keep PHB's happy (and that display the system as 5 9's uptime no matter how many times the mail server crashed!).
The reason why the open-source tools don't work is a lack of comprehensive coverage of Cisco, third party network kit, Linux and Windows. It just doesn't quite "do it all".
The reason why the closed-source tool didn't work (in my mind) is that it just doesn't have the flexibility to deal with anything other than what it's expecting. I've submitted a few dozen support tickets with them (and they will remain nameless) simply because of a lack of SNMP knowledge on their part.
Please forgive all the M$-specific references above; I work in an MS and *IX environment.
Andrew D Kirch - All Things IT
Office: 317-755-0202
"si hoc legere scis nimium eruditiones habes."
Maybe this is overly naïve, but what about the ability to
auto-magically import and search various vendor SNMP/WMI
MIBs? I can think of 3 open source NMS that do a good job if
you set up all 3 to monitor the network, but they all overlap
and none of them really do a good job.
Importing and searching MIBs is an interesting idea. However, for some
MIBs, like Cisco's QoS and Dial-Peer MIBs, wrapper code sometimes has
to be used to ferret out the appropriate groupings to use as logical
entities for display.
WMI requires Windows Authentication, and if one is running Linux tools,
there are issues. I haven't come across an easy way to get to WMI from
Linux yet. Anyone have any suggestions?
jeroen@unfix.org (Jeroen Massar) writes:
> ..., $5M over three years? spread out over 50 network owners that's
> $3K a month. i don't see that happening in a consolidation cycle like
> this one, but hope springs eternal. "give randy and hank the money,
> they'll take care of this for us once and for all."

Heh, for that kind of money you can even convince me to do it.
glibly said, sir. but i disastrously underestimated the amount of time
and money it would take to build BIND9. since i'm talking about a scalable
pluggable portable F/L/OSS framework that would serve disparate interests
and talk to devices that will never go to an snmp connectathon, i'm trying
to set a realistic goal. anyone who wants to convince me that it can be done
for less than what i'm saying will have to first show me their credentials,
second convince david conrad and jerry scharf. (after that, i'm all ears.)
So, i've been the caretaker of a few different snmp pollers
over a few years, as well as done some database foo (250m+ rows/day
of data), and these things interrelate in a number of ways. First,
start with the polling: you need to do bulkget/bulkwalk of the various
mibs to collect the data in a reasonable way, timestamp it all (ideally
internally, before you "cook" the data), poll frequently enough to
detect spikes (including inaccurate spikes and backwards/missing-counter
bugs), etc.
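For the counter bugs mentioned above, here's a hedged sketch in Python (the sanity threshold is my own invention) of turning two 32-bit counter reads into a delta, distinguishing a genuine wrap from a reboot or backwards counter:

```python
MAX32 = 2 ** 32

def counter_delta(prev, cur, max_rate, interval):
    """Delta between two 32-bit counter reads taken `interval` seconds
    apart; returns None for samples that can't be trusted."""
    delta = cur - prev if cur >= prev else cur + MAX32 - prev
    if delta > max_rate * interval:
        # faster than the link allows: a reboot or a backwards counter,
        # not a real wrap -- discard rather than bill garbage
        return None
    return delta
```

The same check is why you want 64-bit ifMIB counters where they exist: on a fast link a 32-bit counter can wrap more than once per polling interval, and then no amount of cleverness recovers the real delta.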
Take a simple set of data you might want to collect:
router
  interfaces (mib)
    up/down
    in/out octets, in/out packets, in errors/out drops
    speed (ifMIB too?)
  ifMIB (64-bit counters, but only sometimes)
    description
    speed (interface mib too?)
  mpls?
    ldp? te? paths?
  mac accounting?
then you get into: do you store the raw data you collect, with
markers for snmp timeouts, or just a 5-min calculation/sample? (this
relates to the above 250m rows/day.) how do you define your schema?
how long does it take to insert/index/whatnot the data? how do you
handle ifindex moves (on more than one vendor, don't forget)?
how do you match that link to a customer for billing? who gets
what reports? engineering reports too? provisioning link-in? tie-in
to the ip address db (interface ip<->customer mapping)?
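One hedged answer to the schema question, sketched with sqlite (the column names are invented, and a real 250m-rows/day install would need partitioning that a toy schema doesn't show): store raw timestamped samples and mark timeouts explicitly with NULL, rather than dropping the row or writing a zero that would later look like an idle link.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sample (
        device_id  INTEGER NOT NULL,
        ifindex    INTEGER NOT NULL,
        polled_at  INTEGER NOT NULL,   -- unix time, taken before cooking
        in_octets  INTEGER,            -- NULL = snmp timeout, not zero
        out_octets INTEGER
    )""")
# the one index every billing/report query will hit
conn.execute(
    "CREATE INDEX sample_key ON sample (device_id, ifindex, polled_at)")
```

Keying on (device_id, ifindex, polled_at) rather than interface name is also what makes the ifindex-move problem explicit: a remap shows up as a new key, which you then reconcile, instead of silently gluing two interfaces' counters together.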
the list goes on and on, and this is just part of it, let alone
any possible tracking of assets/hardware, let alone
proactive network monitoring (tying those traps/walks to the internal
ping(er)), passive network monitoring, etc.
this is a huge burden to figure out, implement, and
then monitor/operate 24x7. miss enough samples or data and you
end up billing too little. this is why most folks have either cooked
their own or use some expensive suite of tools, leaving just a little
bit of other stuff out there.
in a lot of ways, just buying a ge/10ge and paying some
alternate price for it may be cheaper than a burstable rate as it
could reduce a lot of this extra cost. i remember hearing that
it costs telcos more to count/track calls to give you a detailed
bill than the calls themselves cost. this is why flat-rate is nearly king
these days (in the us at least).
- jared