11 Dec 2019 by Mike Weaver
Integration: The Final Step in Change Management
The final step in successful change management is the Integration stage. Here’s how to bring everything together.
In the first part of this series, we explored the reasons to migrate to Office 365, followed by a discussion of the common mistakes made during migration.
We then moved on to some of the technical prerequisites you should be aware of before you begin a migration, and part four of the series talked about how to manage data volume. From there we looked at interdependencies and data elimination, which led on to our most recent post on selecting third-party products and services.
In this blog post, we’re delving into the actual migration process and taking a closer look at what to expect when you begin your migration project.
Office 365 migration from end to end
In our view, the best way to give insight into the steps involved in a migration is to walk through a typical workflow – the clear process for running an email ecosystem migration into Office 365.
At a high level, there are two basic migration methods from traditional Exchange environments – cutover and staged. A cutover migration moves all mailboxes to Office 365 in a single operation; a staged migration moves them in batches over time.
Whilst both are ‘active’ options, even if your current Exchange Server supports them, large cutover migration projects can be logistically very difficult to perform. What’s more, many experienced Office 365 consultants consider the practical limit of both cutover and staged migration methods to be just 150 live mailboxes.
Many organisations carry out hybrid migrations, involving both on-premises Exchange and Exchange Online, which rely on directory synchronisation. This is a suitable option for organisations that need to prioritise business continuity above all else, minimise disruption, and/or need to take more time over the process, as it is possible to operate with a hybrid configuration for as long as you require.
Yet for real flexibility and a full transition to Office 365, a third-party ‘sync ‘n’ switch’ approach is generally considered the preferable choice.
The main reason behind this is that the migration takes place in the background, and can be tested without affecting accessibility or productivity – users are seamlessly switched over to Office 365 when the environment is fully prepared.
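At its simplest, that ‘sync ‘n’ switch’ flow is a loop: sync each mailbox in the background, verify it, and only then switch the user over. The sketch below is purely illustrative – the function names (`sync`, `verify`, `switch`) are hypothetical stand-ins, not any vendor’s actual API:

```python
# Hypothetical sketch of the 'sync 'n' switch' flow. The sync/verify/switch
# callbacks are illustrative placeholders, not a real migration tool's API.

def sync_and_switch(users, sync, verify, switch):
    """Sync each mailbox in the background, then switch only verified users."""
    switched = []
    for user in users:
        sync(user)          # background copy to Office 365; users keep working
        if verify(user):    # integrity check before any cutover
            switch(user)    # repoint the user to Office 365
            switched.append(user)
    return switched

# Usage with stub callbacks:
done = sync_and_switch(
    ["alice", "bob"],
    sync=lambda u: None,
    verify=lambda u: True,
    switch=lambda u: None,
)
# done == ["alice", "bob"]
```

The key design point is that an unverified mailbox is simply never switched, so a failed sync affects the project timetable, not the user.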
Control and reporting
As you can see from the diagram above, every part of the migration process depends on adequate preparation. Once a migration is underway, as with any project, it’s primarily a matter of keeping everything on track to see the required end results. For many organisations, this itself isn’t the problem – the real challenge is understanding what you have in your current environment and deciding what to do with it.
As noted in part three of this series, the overall programme needs to be governed by a comprehensive information management policy and schedule. This may already exist, but it’s commonplace to have it reviewed and enhanced.
Regardless of the technology imperatives, legal and compliance teams regularly insist that all items requiring retention are migrated, primarily because such items present risks in their current state.
For this reason it’s vital to ensure there are no current or pending lawsuits that could affect an archive that is to be migrated. Similarly, relevant data must be found and secured as soon as possible, and it must also remain in its native state as of the date it was placed on legal hold.
Technical planning, visibility of progress and an audit trail should be maintained from the outset. This can be achieved using manual methods; however, a console-based solution that’s directly linked to live tools is preferable.
An automated solution or process enables you to report to stakeholders and all relevant lines of business more effectively. It also allows the operation of a unified helpdesk for users affected by the migration.
What’s more, as we covered more extensively in part three, technical issues (such as performance and capacity of existing servers and networks, backup windows and permissions to access data) need to be taken into account during planning.
According to best practice, the general scale, type and location of data to be migrated should have been identified in the early planning stage. If this is not done, or not done thoroughly enough, it is impossible to get a clear idea of the shape and size of the environment you plan to migrate – and ultimately, where everything should go.
Without this groundwork, once the process is under way you could be constantly dealing with ‘unknown’ items and deciding where they should be placed ad hoc, which is far from ideal, and neither time- nor resource-efficient.
Once the migration is under way, migration agent tools should interrogate the email ecosystem (local disks, shared drives, USB sticks, central storage and multiple Enterprise Vault archives, for instance) to gather details on mailboxes, public folders, PST files and archives. These tools should also identify the relationships and dependencies between the data.
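A minimal sketch of that discovery step, assuming the only sources are loose PST files on disk (real migration agents also interrogate mailboxes, public folders and archive stores through their own APIs):

```python
# Illustrative discovery pass: walk a directory tree and inventory PST files
# by path and size. This is a simplification of what real migration agents
# do across the wider email ecosystem.
import os

def find_pst_files(root):
    """Return a list of (path, size_in_bytes) for every .pst under root."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".pst"):
                path = os.path.join(dirpath, name)
                found.append((path, os.path.getsize(path)))
    return found
```

Even this crude inventory answers the two planning questions above: how much data there is, and where it lives.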
The toolset being used should allow administrators to set which operations are performed on which email. For example, this may cover include/exclude options, as well as procedures for dealing with orphaned, corrupt and other item exceptions.
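The include/exclude and exception handling just described might look like the following sketch. The field names (`corrupt`, `owner`, `sender`) are illustrative, not any real toolset’s schema:

```python
# Hypothetical item classification during migration: route orphaned or
# corrupt items to an exception queue, apply exclude rules, and ingest
# the rest. Field names are made up for the example.

def classify_item(item, exclude_senders=()):
    """Return the migration disposition for a single mail item."""
    if item.get("corrupt"):
        return "exception:corrupt"      # hold for specialist review
    if item.get("owner") is None:
        return "exception:orphaned"     # no mailbox to map it to
    if item.get("sender") in exclude_senders:
        return "excluded"               # filtered out by policy
    return "include"                    # safe to ingest
```

The point is that exceptions are diverted, not silently dropped – they stay visible in the audit trail.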
Similarly, password removal (for password-protected PST files, for example) can be automated, and the system should be expected to manage bandwidth control – and therefore the timescale of the process.
Filtering, deduplication and reconciliation of data – particularly for PST files – do not feature in all toolsets. A capable system should understand how different mail systems and archives store data, and use the appropriate APIs and conversion techniques accordingly.
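As a rough sketch of deduplication, consider fingerprinting each message and keeping only the first copy seen. Real toolsets hash normalised message properties via the mail system’s APIs; hashing raw message text, as here, is a deliberate simplification:

```python
# Minimal deduplication sketch: fingerprint each message body with SHA-256
# and keep the first copy of each exact duplicate.
import hashlib

def deduplicate(messages):
    """Return messages with exact-duplicate bodies removed (first copy wins)."""
    seen = set()
    unique = []
    for msg in messages:
        digest = hashlib.sha256(msg.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(msg)
    return unique

# deduplicate(["a", "b", "a"]) -> ["a", "b"]
```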
The most sophisticated tools apply different policies to different user types, allowing the transfer speed for each individual user and location to be managed.
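Per-user policy selection could be as simple as a lookup table mapping a user’s profile to a bandwidth cap and transfer window. The profile names and limits below are invented for illustration:

```python
# Illustrative per-user-type migration policies: a bandwidth cap and a
# transfer window per profile. All names and numbers are made up.
POLICIES = {
    "executive": {"max_mbps": 50, "window": "overnight"},
    "standard":  {"max_mbps": 10, "window": "overnight"},
    "remote":    {"max_mbps": 2,  "window": "weekend"},
}

def policy_for(user_type):
    """Return the migration policy for a user type, defaulting to 'standard'."""
    return POLICIES.get(user_type, POLICIES["standard"])
```

A defaulting lookup like this keeps unexpected user types from stalling the migration while still letting known profiles be throttled individually.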
There are a number of ingestion protocols provided by migration vendors, but these can inflate the data that needs to be ingested by up to 300%. The effect of this kind of inflation can be extremely detrimental for bandwidth and speed.
A range of proprietary specialist ingestion protocols are able to lighten server and network loads, and can provide substantial increases in speed.
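To see why inflation matters, it helps to do the arithmetic: 300% inflation means ingesting four times the source volume, and transfer time scales with it. A back-of-envelope calculator (figures illustrative):

```python
# Back-of-envelope effect of protocol inflation on transfer time.
# 300% inflation means the ingested volume is 4x the source volume.

def transfer_hours(source_gb, inflation_pct, mbps):
    """Hours to push the inflated volume over a link of `mbps` megabits/s."""
    total_gb = source_gb * (1 + inflation_pct / 100)  # inflated volume
    megabits = total_gb * 8 * 1000                    # GB -> megabits
    return megabits / mbps / 3600                     # seconds -> hours

# 100 GB source at 100 Mbps: ~2.2 hours uninflated, ~8.9 hours at 300%.
```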
Although most of the process is automated, there are still instances when manual intervention is required. A prime example is an item that doesn’t meet minimum thresholds; it should be blocked from ingestion until a specialist has reviewed it.
Testing, switchover and clean-up
By using a ‘sync ‘n’ switch’ approach, it’s possible to verify the integrity of migrated data before triggering the switchover from the old system. This means users can carry on working as normal, without interruption.
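That pre-switchover verification could, at its simplest, compare item counts and content fingerprints between source and target. This sketch assumes both sides can be read as lists of message bodies, which is a simplification of what real tools check:

```python
# Sketch of pre-switchover verification: compare item counts and sorted
# content fingerprints between source and target before repointing a user.
import hashlib

def migration_verified(source_msgs, target_msgs):
    """True only if source and target hold the same set of message bodies."""
    if len(source_msgs) != len(target_msgs):
        return False
    def fingerprint(msgs):
        return sorted(hashlib.sha256(m.encode("utf-8")).hexdigest() for m in msgs)
    return fingerprint(source_msgs) == fingerprint(target_msgs)
```

Sorting the fingerprints makes the check order-independent, since items rarely land in the target in their original sequence.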
Clean-up can be performed either before or after ingestion – the latter being preferable from a resource management perspective. At all times, however, clean-up needs to be a defensible process, in which the organisation can demonstrate exactly what was removed and why.
We hope that after following this blog post series and understanding the main steps involved in an email migration to Office 365, you feel more prepared and ready to begin moving forward with your migration.
To further ensure you have all the information you need, our next – and final – post features a useful self-test questionnaire to help assess how ready your organisation is to begin a migration project.
Cogmotive is the leading global provider of enterprise level reporting and analytics applications for Office 365. Find out more now.