
We don't currently have a redundant Shibboleth IdP but would like to, especially since we use it for Google Apps (among other things) and are looking at other services which may use it.  Having said that, I don't think I've ever had a Shibboleth issue in the entire time I've been looking after it (~2 years).

We too have F5 load balancers and would look to these for stateless failover.  So I'd be interested to know a little about how you've configured the F5s to do hot standby, if you don't mind sharing (my currently limited knowledge assumes it's done by some kind of weighting on the pool nodes?).

Andy

_______________________________
Andy Turner
Networks & Infrastructure | SLS
Sheffield Hallam University

-----Original Message-----
From: Discussion list for Shibboleth developments [mailto:[log in to unmask]] On Behalf Of John Isles
Sent: 04 October 2012 12:26
To: [log in to unmask]
Subject: Re: Resilient Shib IdPs

> So,  I'd be interested to hear from anyone who is doing hot standby 
> with their Shibboleth IdP  (i.e. if IdP1 is responding always use it, 
> if it fails the test switch to IdP2)  and what type of hardware 
> loadbalancer you're using at the front to do this.

We are doing hot standby with two IdPs behind an F5 load balancer.
We decided (way back when) that using Terracotta was too problematic, and we could accept that users would have to re-authenticate if a failover occurred.

The F5 load balancer monitors the state of the live IdP and fails over to the standby when appropriate.
The failover sends all traffic, both browser traffic on port 443 and trust-fabric (back-channel) traffic on port 8443, to the standby server. These have to fail over together.
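As an illustration only (not necessarily how York's F5s are set up), hot standby on a BIG-IP is typically done with Priority Group Activation: the standby pool member only receives traffic once the higher-priority member fails its health monitor. A tmsh sketch, with hypothetical names and addresses; the `/idp/profile/Status` path is the IdP's status handler, but the exact send/recv strings would need tuning for your deployment:

```shell
# Hypothetical health monitor for the IdP's browser-facing port.
create ltm monitor https idp_https_monitor {
    send "GET /idp/profile/Status HTTP/1.1\r\nHost: idp.example.ac.uk\r\nConnection: close\r\n\r\n"
    recv "ok"
}

# Hypothetical pool: the higher priority-group gets all traffic; the
# standby activates only when fewer than min-active-members remain
# healthy in the top group.
create ltm pool idp_pool_443 {
    monitor idp_https_monitor
    min-active-members 1
    members add {
        10.0.0.1:443 { priority-group 10 }
        10.0.0.2:443 { priority-group 5 }
    }
}
```

To make 443 and 8443 fail over together, as John describes, a second pool for 8443 could reference the same members and the same monitor, so both ports track the health of the same box.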

We actually use two F5s for resilience, plus load-balanced LDAP servers. The IdPs are VMs that can float around our infrastructure. The idea is to avoid any single point of failure where possible.

F5 load balancers are expensive, but there are cheaper options available.
The hot-standby setup also makes it easy to patch/upgrade the servers with no downtime.

John I
IT Services,
University of York