
ArchiveShuttle 6.6 – Our CTO Answers Some Key Questions

20 Jan 2015 by olpa

This week we sit down with our CTO, Michel Zehnder, to put to him the key questions we’ve heard from the community about the features of ArchiveShuttle 6.6 and its revolutionary capabilities for ingesting email archives into Office365.

Michel, we’ve heard a great deal recently about how ArchiveShuttle 6.6 offers incredible speeds for the ingestion of email archives into Office365. Is this due to how the tool handles batch processing?

In part, yes. Batch processing is one of several features within ArchiveShuttle that work together to ingest your email archives into Office365 efficiently. While our intelligent batching process has always been a crucial component of ArchiveShuttle, we have further refined it. The process works by identifying the smaller items in an archive and batching them together before sending them to Office365. So we take a hundred smaller items, batch them together and send them to Office365 in one go. This means only one call to the API instead of one call for each item, saving bandwidth on your internet connection and making the migration more efficient. This, combined with an intelligent response to back-off commands and multi-threaded ingestion, allows ArchiveShuttle to offer you greater efficiency.
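To illustrate the batching idea in general terms (this is a minimal sketch, not ArchiveShuttle’s actual code): small items are grouped into a single upload call instead of one call per item. The size threshold, batch size and the send_batch helper below are all hypothetical.

```python
# Illustrative sketch only: group small archive items into one upload call
# instead of one call per item. The threshold, batch size and send_batch()
# helper are assumptions, not ArchiveShuttle internals.

SMALL_ITEM_LIMIT = 150 * 1024   # treat items under ~150 KB as "small" (assumed)
BATCH_SIZE = 100                # e.g. one hundred small items per call

def build_batches(items):
    """Yield lists of items: small items grouped together, large items alone."""
    small = [i for i in items if i["size"] <= SMALL_ITEM_LIMIT]
    large = [i for i in items if i["size"] > SMALL_ITEM_LIMIT]

    for start in range(0, len(small), BATCH_SIZE):
        yield small[start:start + BATCH_SIZE]   # one API call per batch
    for item in large:
        yield [item]                            # large items go individually

def ingest(items, send_batch):
    """send_batch(batch) stands in for a single call to the ingestion API."""
    for batch in build_batches(items):
        send_batch(batch)
```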

Can you tell us a little more about how ArchiveShuttle handles these “Back-Off Commands”?

Sure. So, if Office365 feels like it’s being sent too much for ingestion, it will send a back-off command to say it won’t accept any more content for the amount of time it needs to finish its task. We know how long it will take for the task to clear per mailbox, so we hold off for that amount of time. We won’t send anything to the Office365 target mailbox that isn’t necessary during this period, which frees up ingestion potential for other mailboxes that may be available. The logic within ArchiveShuttle takes the busy mailbox out of the loop, waits for it to free up, and uses the available ones in the meantime.
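As a rough sketch of that per-mailbox back-off behaviour (again illustrative only, assuming a hypothetical upload helper that reports a retry-after delay when the service backs off):

```python
# Illustrative sketch of per-mailbox back-off handling. The upload()
# helper and its return value are assumptions, not the product's API.
import time

def ingest_mailboxes(work_queues, upload):
    """work_queues: {mailbox_id: [batches]}. upload(mailbox, batch) returns
    None when the batch is accepted, or a retry-after delay in seconds
    when the service issues a back-off for that mailbox."""
    blocked_until = {}          # mailbox_id -> time when it frees up again

    while any(work_queues.values()):
        now = time.time()
        for mailbox, batches in work_queues.items():
            if not batches or blocked_until.get(mailbox, 0) > now:
                continue        # skip mailboxes that are empty or backing off
            retry_after = upload(mailbox, batches[0])
            if retry_after is None:
                batches.pop(0)  # accepted: move on to the next batch
            else:
                # Back-off: take this mailbox out of the loop for the
                # indicated period and spend the time on other mailboxes.
                blocked_until[mailbox] = now + retry_after
        time.sleep(0.1)         # avoid a tight spin when everything is blocked
```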

So, it’s a form of multi-threaded ingestion?

We use multi-threaded ingestion on many levels, such as parallel multi-mailbox ingestion. Our default setting is 3 batches per mailbox, 10 mailboxes simultaneously, with 30 threads ingesting in parallel. Whatever free threads we have, we use for other mailboxes automatically. In the event of back-off commands, we utilize any free threads for more ingestion.
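To make those default numbers concrete, here is a hedged sketch of the thread layout (three concurrent upload lanes per mailbox, thirty worker threads in total). The pool and the upload_batch helper are illustrative stand-ins, not the product’s code.

```python
# Illustrative only: 3 concurrent upload lanes per mailbox, 30 worker
# threads in total (the defaults quoted above). upload_batch() is a
# stand-in for the real ingestion call, not the product's API.
from concurrent.futures import ThreadPoolExecutor

BATCHES_PER_MAILBOX = 3                                    # lanes per mailbox
MAILBOXES_IN_PARALLEL = 10
MAX_THREADS = BATCHES_PER_MAILBOX * MAILBOXES_IN_PARALLEL  # 30 threads

def ingest_all(mailbox_batches, upload_batch):
    """mailbox_batches: {mailbox_id: [batch, ...]}. Each mailbox gets up to
    three lanes that upload its batches sequentially; a pool of 30 threads
    runs the lanes, so a finished lane frees its thread for other mailboxes."""
    def lane(mailbox, batches):
        for batch in batches:
            upload_batch(mailbox, batch)

    with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
        for mailbox, batches in mailbox_batches.items():
            # Split the mailbox's batches across up to 3 lanes (round-robin).
            for offset in range(min(BATCHES_PER_MAILBOX, len(batches))):
                pool.submit(lane, mailbox, batches[offset::BATCHES_PER_MAILBOX])
```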

Great. While speed and efficiency are undoubtedly priorities for migration customers, the need to prove a migration project is fully compliant in the eyes of regulators has become equally pressing. How do the new capabilities of ArchiveShuttle 6.6 address this need?

With traditional methods of archive migration (MAPI and EWS), you have to copy every item, along with its properties, from one format to another, for instance into the EWS format. Certain properties are not transferable, and anything you can’t set on an EWS message can get lost. From a compliance perspective, you’ve actually changed the message during the process of migration. AIP (Advanced Ingestion Protocol), a new alternative to standard EWS, doesn’t have to convert properties one by one. With our approach, AIP ensures the item arrives in the new target exactly the same as when it left, avoiding the need for reformatting while reducing calls to the EWS API. Also, the “standard” EWS batching method cannot handle items with attachments, so you can’t group them together. AIP can.

Also, we don’t have to worry about the transfer of associated elements such as journals, calendar items and tasks. There are sometimes custom items that can’t be recognised, but we take the item (whatever format it’s in) and ingest it without reconstituting or altering it in any way when it is migrated to Office365.

Click here to learn more about ArchiveShuttle