Interview with Michel Zehnder: Archive Shuttle or traditional migration methods?
In this interview with Quadrotech CTO Michel Zehnder, we explore the key differences between Archive Shuttle and more traditional migration methods.
Michel, would you tell us why Archive Shuttle is a next-generation migration tool? How does Quadrotech’s approach differ from traditional methods?
That’s a very interesting question. We’ve found that first- and second-generation solutions work much like a normal copy process, just with some additional preparation.
The traditional 3-step approach to archive migration
First, you need to create new archives for the users you want to migrate and disable them on the old archiving system.
Second, create a mapping table in Excel or as a CSV to map each source archive to its target archive. Next, the content is copied to the new archive. The risk here is that extraction and ingestion are performed simultaneously, so every single byte flows through the migration application or server, which can become the bottleneck.
Last but not least, you have to process all the shortcuts in the mailboxes. All of these steps have to be performed more or less manually, without the benefit of a defined workflow.
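The mapping table from step two can be sketched as a simple CSV pairing each source archive with its target. All archive names and column headers here are hypothetical, not product syntax:

```python
import csv
import io

# A hypothetical mapping table, as it might be prepared in Excel or as a
# CSV for a traditional migration (archive names are illustrative only).
mapping_csv = """source_archive,target_archive
vault-store-01/jsmith,exchange-online/jsmith
vault-store-01/mmeier,exchange-online/mmeier
vault-store-02/alopez,exchange-online/alopez
"""

# Parse the CSV into a source -> target lookup used by the copy step.
mapping = {row["source_archive"]: row["target_archive"]
           for row in csv.DictReader(io.StringIO(mapping_csv))}

print(mapping["vault-store-01/jsmith"])  # exchange-online/jsmith
```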
Why is Archive Shuttle better than the traditional approach to migration?
Unlike the traditional method, we take a modular approach and can spread the load over the archiving servers in a simpler way without letting the migration software become the bottleneck. In our concept, no migration content flows through the migration server which results in faster migration speeds and more flexibility in terms of planning your project.
Second, we don’t try to migrate every item in an archive in a single shot. We use a multi-stage migration approach called “sync ‘n switch” which allows us to decouple data migration from user migration. We synchronize the items between source and target ahead of the user migration. When I talk about synchronization, I include the ability to migrate deltas. You can even keep archiving active on the source side during the migration, and everything will catch up automatically.
Last but not least, we have included several workflows in the product that allow for automated provisioning and ensure no user disruption during the migration. Shortcut conversion or deletion is also performed automatically as part of the workflow.
Would you explain in more detail what benefits Archive Shuttle’s modular approach provides the customer?
Of course. As I mentioned before, Archive Shuttle offers much more flexibility, improved performance, and greater reliability throughout the migration.
Let me give you an example. Consider a migration from Enterprise Vault into another system; it doesn’t matter whether it’s another Enterprise Vault system, Exchange, or a cloud platform.
Even in medium-sized environments you’ll typically find more than one server running Enterprise Vault, and each server can handle multiple vault stores. We typically deploy extraction modules to each Enterprise Vault server. Within Archive Shuttle we can assign exactly which module is responsible for which task, which provides greater flexibility in scaling the migration.
As our modules are deployed locally on the archiving servers, we avoid additional network hops during extraction. From a performance perspective, it’s important to get them as close as possible to the archiving backend; in the Enterprise Vault example, that means as close as possible to each EV storage service responsible for a vault store.
If you are going to migrate 100 million items, and network/API delays add just 100 milliseconds per item, we’re talking about nearly 2,800 hours, or roughly 116 days, of additional migration time, which we can avoid through local deployment.
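The arithmetic behind this estimate can be checked directly, assuming the extra delay is paid serially, once per item:

```python
items = 100_000_000      # items to migrate
delay_s = 0.100          # 100 ms of extra network/API latency per item

# Total added time if the delay is incurred once per item, serially.
extra_hours = items * delay_s / 3600
extra_days = extra_hours / 24

print(f"{extra_hours:,.0f} extra hours = {extra_days:.0f} extra days")
# 2,778 extra hours = 116 extra days
```

In practice parallel extraction modules would shrink this, but the point stands: per-item latency multiplies into months at this scale.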
Last but not least, the reliability of the migration is increased. Consider what happens if you need to reboot one of the EV servers, or the EV services are down for some reason: only that particular server and its module are affected. If you have a longer outage, you simply create another module on another server and make it responsible for the vault stores residing on the server that’s down.
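That failover idea can be sketched as a simple reassignment of vault stores between modules. The server and store names, and the `reassign` helper, are hypothetical illustrations, not the product’s internals:

```python
# Hypothetical module-to-vault-store assignment: each EV server hosts a
# local extraction module responsible for its vault stores.
assignments = {
    "EVSERVER01": ["VaultStore1", "VaultStore2"],
    "EVSERVER02": ["VaultStore3"],
}

def reassign(assignments, failed_server, standby_server):
    """Hand a failed server's vault stores to a module on a standby server."""
    stores = assignments.pop(failed_server, [])
    assignments.setdefault(standby_server, []).extend(stores)
    return assignments

# EVSERVER01 suffers a longer outage; its stores move to EVSERVER03.
reassign(assignments, "EVSERVER01", "EVSERVER03")
print(assignments)
```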
Michel, tell us a little about Quadrotech’s ‘Sync’n’Switch’ technology? How does it work?
To explain Sync’n’Switch effectively, let’s split it into two stages: the synchronization (data migration) stage and the switch (user migration) stage.
In Stage 1, we start synchronizing items in the background from the source to the target system.
Stage 1 does not affect the usual functionality of the source system; everything works as before. New items continue to be archived according to the configured policy, and users can still access all items and the full functionality of the system.
In the background, we fetch data and ingest it into the target system in a temporary archive, which is completely hidden from and inaccessible to the user. This way we can start syncing items weeks or even months before the real migration occurs.
The sync process also fetches deltas. That means that if my switch happens today and all my items have already been synchronized to the temporary target, but new items were archived in the meantime, Archive Shuttle will still find, identify, and synchronize them automatically.
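A delta pass like the one described boils down to a set difference over item identifiers. The IDs and the `delta_sync` function are hypothetical sketches, not the product’s internals:

```python
def delta_sync(source_ids, target_ids):
    """Return the item IDs still missing from the temporary target."""
    return sorted(set(source_ids) - set(target_ids))

source = ["msg-001", "msg-002", "msg-003", "msg-004"]  # source archive today
target = ["msg-001", "msg-002"]                        # synced in an earlier pass

# Only the items archived since the last pass need to be copied.
print(delta_sync(source, target))  # ['msg-003', 'msg-004']
```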
Stage 2 is the actual migration of the archive. Here we start a workflow that takes care of the disable process on the source, the assignment of the temporary archive to the user, and the processing of shortcuts, calendar items, and all other stubs in the user’s mailbox. Stage 2 is typically fast, taking just a few minutes per user.
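The per-user switch can be pictured as an ordered list of steps executed by the workflow. The step names and the `switch_user` function are illustrative only, not the product’s actual workflow definitions:

```python
def switch_user(user):
    """Run the hypothetical 'switch' steps for one user, in order."""
    steps = [
        "disable archiving on the source system",
        "assign the temporary target archive",
        "process shortcuts, calendar items and other stubs in the mailbox",
    ]
    return [f"{user}: {step}" for step in steps]

for line in switch_user("jsmith"):
    print(line)
```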
This Sync’n’Switch approach keeps the migration transparent to users and allows you to prepare it without any time pressure. There are several other advantages and automation possibilities, such as having the move of an Exchange mailbox trigger the migration of an archive, but that would be too complex to cover in a short answer. I’m pretty sure we’ll cover that in one of our Quadrotech TV sessions as a video.
We hope you enjoyed our interview with our CTO. In case you missed parts one and two, they can be found here: