Pica8 - Open Source Cloud Switch


We are starting to distribute Pica8 Open Source Cloud Switches:


In particular, a Pica8 switch with the following specifications
(including open source firmware):

- HW: 48x1Gbps + 4x10Gbps

- Firmware: L2/L3 management for VLAN, LACP, STP/RSTP, LLDP, OSPF,
RIP, static routes, PIM-SM, VRRP, IGMP, IGMP snooping, IPv6,
RADIUS/TACACS+, as well as OpenFlow 1.0

would compete with a Cisco Catalyst 2960-S (Model WS-C2960S-48TD-L) at
half the price (~2k USD).

Mail : pica8.org@gmail.com

Cool story bro.

Sounds interesting. What chipset does this run on?

Also, what's a cloud switch? Is this a switch which forwards L2 traffic, or did I miss something?


* Lin Pica8 <pica8.org@gmail.com> [2010-10-18 13:27]:

We are starting to distribute Pica8 Open Source Cloud Switches:

open source? you gotta be joking.

"Currently, the Pica8 driver is released in binary form"

none of the interesting low-level drivers is open. none. zero.

Good question, Nick: what is a cloud switch? Is this like VSS in Cisco, where you have a virtual chassis?

VSS is virtual management software for a virtual switch. This box looks like a piece of hardware that you can plug things into, so I'm just wondering what makes this a cloud switch and some other piece of kit not a cloud switch.


Because 'cloud computing' is the latest buzzword, and their marketing
department thought that attaching that buzzword to it would
increase sales? :-)

Never mind that clouds contain nothing but vapor...

Ken Matlock
Network Analyst
Exempla Healthcare
(303) 467-4671

Has our industry ever really fundamentally defined what is "cloud computing"???

Even though "MPLS" is sort of a buzzword too, we can define it: how it works, its protocol, and so on...

But cloud computing?


Yes, it is distributed high-performance computing on a rainy day with
a 99% chance of marketing hype and a 100% chance of
non-interoperability between clouds ... forecast may vary in your area.


My take on "cloud computing" is simply provisioning servers or
virtual servers (say, VMware or KVM) on the fly as needed. So you would
have a "pool" of servers. When load for one application rises, more
servers for that application are taken from the pool and added to the
mix as needed.

When load drops, those instances are removed from the rotation handling
that application and returned to the pool of free (virtual) servers.
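That elastic pool can be sketched as a small control loop. This is just a toy illustration of the idea described above; all names, thresholds, and the load metric are hypothetical, and a real deployment would drive a hypervisor API rather than Python lists:

```python
# Toy autoscaler for the "pool of servers" model described above.
# All names and thresholds are hypothetical stand-ins.

free_pool = ["vm1", "vm2", "vm3", "vm4"]   # idle (virtual) servers
active = {"webapp": ["vm0"]}               # servers currently in rotation

HIGH, LOW = 0.8, 0.2                       # load thresholds (fraction of capacity)

def rebalance(app, load):
    """Pull a server from the pool when load is high; return one when low."""
    if load > HIGH and free_pool:
        active[app].append(free_pool.pop())   # scale out
    elif load < LOW and len(active[app]) > 1:
        free_pool.append(active[app].pop())   # scale in

rebalance("webapp", 0.95)   # spike: a server joins the rotation
rebalance("webapp", 0.05)   # lull: it goes back to the free pool
```

The point is only that "cloud" here is a policy loop around ordinary VMs, not a new technology.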

Providers of network gear have been working on applications that monitor
the gear in the application delivery path (e.g. metrics on load
balancers) and automatically deploy instances as needed to handle that
application. This would be more of interest to providers of "bursty"
applications where they might have high load sometimes but a relatively
low "base" load. It could also be of interest to people who serve
customers in different time zones, such as the US and Europe where the
US application can be turned down at night and an application serving
Europe loaded up during their business day.

It could also be of interest for someone who is expecting a temporary
"surge" of activity. It leads, though, to a completely different kind
of attack called the "denial of sustainability" attack where a
cloud-based provider is hit with a flood of "legitimate" transactions
causing the "cloud" management to kick in more servers to handle the
additional load. If that cloud is rented, a content provider could be
hit with a huge bill.


Nice answer. Do you think cloud services are based on an oversubscription model,
where they hope those who purchase servers don't actually max them out memory/CPU-wise?

Do you also believe that cloud services should never have any downtime? To me, cloud services are synonymous with redundancy....

"Cloud" is the new mainframe i.e. "it's running somewhere else ... "

And the Emperor is naked ... ;-)

How does it compare to the OpenFlow design ideas?



Nice answer. Do you think cloud services are based on an oversubscription model,
where they hope those who purchase servers don't actually max them out
memory/CPU-wise?

Do you also believe that cloud services should never have any downtime? To
me, cloud services are synonymous with redundancy....

That's an interesting question, and really points more to the fact that
"cloud" is rather poorly defined.

For example, consider the T-Mobile Sidekick Danger server crash/disaster.
This is frequently pointed to as a "failure of the cloud", but in reality,
it appears to have been trusting data to a company that wasn't exercising
proper care in maintaining its servers. People glommed onto the concept
that it was a failure of the "cloud." However, one could argue that, quite
often, anytime something magically disappears into a part of the Internet
we don't have physical control over, it gets blamed on the "cloud"...

I've been toying with defining cloud in a different direction.

We have dedicated servers. You get a 10 GHz 24-core CPU with 1TB of
RAM. That's pretty clear and familiar to server geeks.

We have virtual servers. You get (up to) M GHz and N cores of that
same machine. Oversubscription is possible, but not required. In
many cases, oversubscription is desirable, because that's where the
capex and opex savings of less hardware come in.

In both those cases, we get tied up in the specifics of hertz and
cores and amount of memory. In the virtual server case, we make some
progress towards a model where a VM could be migrated around onto
more suitable hardware. This is useful for allowing the proper sizing
of a virtual server, for redundancy, upgrades, etc.

It seems, though, that ultimately what people are thinking of
when they think of the cloud is the ability to just have stuff "run"
without necessarily having to worry so much about the details. In
some cases, they're looking for redundancy, or reliability. In many
cases, they just want something to be out there without so much effort
on their part. They want it to run fast if it gets busy, and don't
care if the CPU is oversubscribed ... as long as they can get what
they're paying for when they need it.

I don't think cloud service purchasers will ultimately be that interested
in worrying about whether they "max out" memory/CPU. I think they don't
want to have to worry about it too much, though they probably want to be
protected from bill shock. That means a model where their server might
actually be hosted on a large host with a few hundred other mostly idle
VMs when their VM is idle, and then get migrated onto other hardware if
demand spikes. We have technology that can even power on additional host
hardware, so there are ways to save on power/cooling during non-peak
periods.
I think you'd find such models are harder to implement if you're too
focused on the "evil" of oversubscription. I think what you want to avoid
are providers who are unable to maintain sufficient spare capacity to cope
with peak demand.
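That migrate-rather-than-forbid-oversubscription model can be caricatured in a few lines. This is only a sketch of the placement idea; the names and the 0.5 threshold are hypothetical, and real placement logic would be far more involved:

```python
# Toy placement policy: consolidate mostly-idle VMs onto one shared,
# oversubscribed host, and move a VM to dedicated hardware when its
# demand spikes. Names and the 0.5 threshold are hypothetical.

def place(vms):
    """Map {vm_name: demand in [0, 1]} to a host per VM."""
    placements = {}
    for name, demand in vms.items():
        if demand > 0.5:
            placements[name] = "dedicated-" + name   # busy: own hardware
        else:
            placements[name] = "shared-host-1"       # idle: pack together
    return placements

place({"db": 0.9, "blog": 0.01, "wiki": 0.02})
```

Oversubscription is the normal case here; the provider's real obligation is keeping enough spare capacity that the "dedicated" branch always has somewhere to land.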

... JG

If it's based on a Broadcom chip, trust me, they are doing the world a favor by not exposing you to the SoC SDK.

(It's so horribly undocumented that it took a week to figure out how to build it and another two weeks to actually get it to build something that could be used.)

In at least one sense I think that those are the same thing.


* Ricky Beam <jfbeam@gmail.com> [2010-10-18 21:32]:


To get a better overview of a Cloud (or OpenFlow) Switch, I would
like to invite you to read the presentation entitled "FI technologies
on cloud computing and trusty networking" from our partner, Chunghwa
Telecom (a leading ISP in Taiwan).


Mail : pica8.org@gmail.com

Coming back to the subject of cloud impact, from a network
perspective, here is an over-one-year-old recording on the subject
(Doug Gourlay's comment: "Cloud could break the Internet if not
deployed on capable networks"):


Stefan Mititelu

Yes, "cloud computing" is a vastly overused term that is hard to nail down.
That is why I try to get people to use the specific technology term they are
talking about.

Basically, in my world "cloud computing" is the vision of having computation
and storage just 'out there' without having to worry about it, and just
paying for a bit more or less as you go... but it's not a "technology".
Sometimes it helps to flip the term ... for example, "desktop computing" is
sort of the inverse ... again, it's not really a "technology" either... just a
broad term, but we all kind of know what it means by now, like "laptop
computing".
There seem to be two major views on technologies to make this "cloud
computing" happen. Their views on scale/transparency are considerably
different:
a) Bigger layer 2 networks with VMware-type mobility and no IP address
changes. Technologies in this space are much more than just L2 switching; it's
L2 switching on larger scales, with encapsulation, multipathing, etc. This is
where technologies like IEEE 802.1aq Shortest Path Bridging and IEEE 802.1ah
mac-in-mac come into play. These tend to be appropriate for existing
enterprise applications (or complete virtual desktops) and simply make
existing DC L2 fabrics bigger and available for virtualization. No
application software changes are required; it's done under them, and end hosts
can't tell what's happening.

b) All-new applications/environments that don't care if their IP address
changes and deal with it transparently. In this model you can make an
application run anywhere, move it around, etc. without any special
infrastructure .. the smarts are between the application and its hosts.
Basically dumb infrastructure and smart applications, a.k.a. over-the-top
stuff. I suppose hot movement of one of these applications requires
co-ordination by the application itself, while if it's done below, as in a), it
can be transparent.

So depending on where you sit (generically run something anywhere vs. run
specific things anywhere), you end up with slightly different underlying
technologies... both with the overused moniker 'cloud'.

Ok .. flame away .. ;-)