We did it. Migrated the hosts/data. Production is now running in Kansas, DR in Georgia, and the old datacenters in NY/NJ are one step closer to being shut down.
A couple of interesting things I learned today.
SRDF/A is a great technology for replicating over long distances while maintaining what they call a “dependent-write-consistent” state. It means that even though the replication is handled asynchronously, with minimal performance impact to the host, you're going to lose only a minimal amount of data in the event of a failure. (In our case, while it was running the R2 disks were about 45-60 seconds behind the R1s.)
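For anyone curious how you'd actually check that lag, here's a minimal sketch using the Solutions Enabler SYMCLI that ships with this gear; the device group name "prod_dg" is hypothetical, and the exact query output varies by Enginuity/Enabler version.

```shell
# Query the SRDF/A session for a device group (name is hypothetical).
# The output shows the session state and cycle information, which is
# how you see roughly how far the R2s are trailing the R1s.
symrdf -g prod_dg query -rdfa
```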
We also performed a “failure” (disconnected both Gig-E ports to simulate the Kansas site dropping out) and brought the DR hardware up as primary, then reconnected the links, unmounted the DR volumes, and restarted the SRDF/A session.
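The test above boils down to a pretty short SYMCLI sequence. A rough sketch, assuming Solutions Enabler and a hypothetical device group "prod_dg" (your group names and exact options will differ):

```shell
# With the links down, make the R2s read/write so the DR hosts can
# bring the applications up as primary:
symrdf -g prod_dg failover

# Once the links are back and the test is done, hand control back to
# the R1 side and resynchronize:
symrdf -g prod_dg failback

# Make sure the session is back in asynchronous mode:
symrdf -g prod_dg set mode async
```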
The only downside I’ve found with SRDF/A is that it’s a royal pain to stop and restart the replication. In cases like this one, where once a week they take the R2s offline to run a 20-hour backup off them, they are putting themselves at unneeded risk. It’s a situation where TimeFinder/SNAP would be a great benefit: you snap the R2s at midnight and back them up, leaving your R2s in sync with your R1s for the duration. You can also then mount the snap volumes on a separate media server, avoiding having to reconfigure the DR server as a temporary media server.
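The snap-based workflow I'm suggesting would look something like this. This is a sketch only, assuming TimeFinder/SNAP via SYMCLI with the virtual (VDEV) pairings already configured; the device group "dr_dg" is hypothetical:

```shell
# Create and activate a point-in-time snap of the R2s at midnight.
# SRDF/A keeps running throughout, so the R2s never fall behind:
symsnap -g dr_dg create
symsnap -g dr_dg activate

# ...mount the snap (VDEV) volumes on the separate media server and
# run the 20-hour backup against them...

# When the backup finishes, tear the session down:
symsnap -g dr_dg terminate
```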
It’s just a thought.
It’s always a great feeling when you hit the deadline dead-on, especially when you’re dealing with a situation where the requirements kept changing throughout the project, even to the point of having to add new devices at the last minute.
Oh well, on to the next. At least the next one will keep me closer to home: a small-scale data migration from DMX2 to DMX2 within the same room. This should be a cakewalk.