Pitfalls of Cloudification and What They Don’t Tell You About SAS Cloud Migration

Did you know that the majority of cloud migration projects fail? IPulse puts the figure at around 44-57%, and TechRepublic reports that at least 55% of migration projects were underestimated in time and resources. Our previous articles discuss the benefits of cloudification at length, but as the old saying goes, if it seems too good to be true, it probably is. We would be remiss if we offered only one side of the story.

In this article, we disclose where most cloudification difficulties stem from. We dive into the subtle technical details that differentiate an off-premise system from an on-premise one, details an inexperienced project team may overlook, with potentially massive impacts on timelines, performance, security, and costs. Finally, we share lessons we’ve learned from migrating many SAS customers to the cloud.

Read on to learn how you can proceed with your SAS migration with a better strategy and increase your chances of success.

#1: “Differences in cloud hardware could make performance worse and more costly than on-premise.”

One of the reasons many organisations migrate to the cloud is to take advantage of enterprise-grade server specifications. Ideally, the higher specifications of cloud-hosted infrastructure will make an organisation’s workload run more efficiently.

Cloud infrastructure is designed to scale horizontally rather than vertically, which in most cases is the more economical approach. The hardware specifications of cloud machines reflect this strategy: to increase performance, you add machines rather than increasing the capacity of a single one.

If the differences between on-premise and cloud hardware are not considered, the servers may be misconfigured and result in unsatisfactory performance. The following hardware components are considered when assessing a system: CPU, Memory, Network, and Disk Usage.


CPU

Some virtual machines run at lower clock speeds than on-premise servers. Many CPUs used in the public cloud have as many as 24 logical cores. This is intentional, given the horizontal scaling strategy and factors such as cost and heat control.

In Azure, for example, compute-heavy servers run 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade Lake) processors and Intel® Xeon® Platinum 8168 (Skylake) processors, with base frequencies of 2.60 GHz and 2.70 GHz, respectively. The 8168, for instance, has 24 cores and supports 48 threads.

A laptop or desktop machine with a 10th-generation i7 processor has a 3.6 GHz base frequency but typically supports only 4 cores and 8 threads. In this case, an on-premise device may outperform the cloud-based one for single-threaded applications. For many applications, a workaround for slower clock speeds is to increase the number of cores. However, this can be particularly tricky for SAS 9, where many procedures are single-threaded. SAS Viya procedures, on the other hand, have been redesigned to take advantage of multi-threaded environments.
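How much extra cores can compensate for a slower clock depends on how parallel the workload is. Amdahl’s law gives a rough upper bound; the short Python sketch below is illustrative only, and the parallel fractions (20% and 95%) are hypothetical, not measured SAS figures.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: upper bound on speedup when only a fraction of
    a workload can run in parallel across `cores` cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A largely single-threaded SAS 9-style job (say 20% parallelisable)
# barely benefits from 24 cloud cores: at best about a 1.24x speedup.
print(amdahl_speedup(0.20, 24))

# A heavily multi-threaded Viya-style job (say 95% parallelisable)
# scales far better on the same machine: roughly 11x at best.
print(amdahl_speedup(0.95, 24))
```

The takeaway matches the text: extra cores only pay off when the procedures themselves can use them.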

Clock speeds and limitations of SAS related to multi-threading should be considered when sizing the CPU requirements.


Disk Usage

When designing your cloud infrastructure, the frequency of data access should be considered. Frequently accessed files can be stored in faster, hot storage. Warm storage suits less frequently accessed files, while cold storage can be considered for archival purposes.

Analytics workloads require faster disks when running batch processes to ensure computations complete within a given time frame. Beyond using hot storage for analytics, there are additional factors to consider.

On-premise disk performance is not linked to the size of the device: a 100 GB SSD usually performs as fast as a 1 TB SSD. In the cloud, by contrast, a device’s performance increases with its size. It is not enough to merely buy the size you need; you also purchase performance by adjusting the size accordingly. Some disk configurations come with “performance credits”: they perform well while credits are available, and performance drops once the credits are exhausted.
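To build intuition for how such a credit bucket behaves, here is a deliberately simplified Python model. The disk figures (120 baseline IOPS, 3,500 burst IOPS, 30 minutes’ worth of credits) are hypothetical; real providers document their own credit accounting, which differs per disk type.

```python
def burst_runtime_seconds(credit_seconds: float,
                          baseline_iops: float,
                          burst_iops: float,
                          demanded_iops: float) -> float:
    """How long a burstable disk can sustain `demanded_iops` before its
    credit bucket (sized as `credit_seconds` of full-burst headroom)
    runs dry. Simplified model for illustration only."""
    if demanded_iops <= baseline_iops:
        return float("inf")  # demand within baseline never drains credits
    drain_rate = demanded_iops - baseline_iops          # credits spent/sec
    credits = credit_seconds * (burst_iops - baseline_iops)
    return credits / drain_rate

# Hypothetical small disk: 120 baseline IOPS, 3500 burst IOPS,
# a bucket worth 30 minutes of full burst.
print(burst_runtime_seconds(1800, 120, 3500, 3500))  # full burst: 1800 s
print(burst_runtime_seconds(1800, 120, 3500, 1000))  # partial burst lasts longer
```

The point is that a batch job demanding sustained high IOPS on an undersized burstable disk will run fast at first and then fall off a cliff, which is exactly the behaviour an inexperienced team is likely to misdiagnose.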

Another difference in cloud disk storage is ephemeral drives, whose contents are wiped when the virtual machine is powered down. When a cloud server is restarted, a new onboard SSD device is allocated, which means any data stored on the previous device is irrevocably lost.

One of our clients shared an instance when their virtual machine was restarted for maintenance purposes, and their data could not be recovered. Because onboard SSDs provide consistent excellent performance but store data temporarily, they are well-suited for hosting temporary SAS data locations such as WORK and UTIL areas.

A project team with relevant cloud experience will know to account for these differences and can help your organisation avoid profound cost implications.


Network

The rule of thumb for network usage: transferring data into the cloud is free, but downloading data is not. Unlike on-premise infrastructure, where you can move data freely without cost implications, in the cloud data movement must be planned carefully to avoid unnecessary costs.

Bandwidth requirements are usually not a concern for on-premise systems. However, in the cloud, moving data to the proper location in a timely manner is essential for ensuring your platform’s responsiveness. To illustrate, one of our clients observed that a procedure took so much bandwidth that other users could not access some of their systems.

Data movement and data latency can lead to significant delays. If your analytical processes require a lot of on-site data, your batch length could be significantly increased without sufficient bandwidth. Delays like this can substantially reduce the benefits of moving to the cloud.


Memory

Memory allocation is directly proportional to other resources, such as CPU cores.

“Since SAS Viya is an in-memory processing system, the CAS controller and worker nodes require not only an ample amount of memory but balanced memory configured with speed to uphold the highest performance,” says Tony Brown. He recommends at least 2x the RAM size of your current on-premise workload, or 3x if there is significant data manipulation. Minimum memory speeds of 1600 MHz are recommended, and CPU and memory speeds should be matched as closely as possible (a 1:1 ratio if at all possible).
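That rule of thumb is easy to turn into a first-pass sizing estimate. The sketch below simply encodes the 2x/3x guidance quoted above; treat it as a starting point, not a substitute for a proper sizing exercise.

```python
def viya_memory_estimate(onprem_ram_gb: float,
                         heavy_data_manipulation: bool = False) -> float:
    """Rule-of-thumb CAS memory sizing: at least 2x the current
    on-premise RAM, or 3x when the workload does significant data
    manipulation (per the guidance quoted above)."""
    factor = 3 if heavy_data_manipulation else 2
    return onprem_ram_gb * factor

# A platform currently running on 256 GB of RAM:
print(viya_memory_estimate(256))        # 512 GB as a baseline
print(viya_memory_estimate(256, True))  # 768 GB with heavy data manipulation
```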

These are just some of the nuances that differentiate cloud-hosted and on-premise infrastructure.

#2: “There are some parameters or features that can make your batch run more efficiently.”

As with on-premise devices, there are many parameters and features you can use to tune your cloud-hosted server’s performance. An experienced project team can help you configure your virtual machines to get the most out of them when running SAS.

Superfetch (for Windows)

One such example is Windows Superfetch. Superfetch is designed to make your device run more efficiently by pre-loading frequently used applications and data. By analysing your usage, it caches the data, programs, and libraries your application is likely to need before they are requested. Data can be prefetched when a program is launched or at boot, and there is also an option to prefetch everything.

Superfetch can make many business applications run faster, including SAS programs.

Other Caching Techniques

Moving to the cloud makes other caching techniques available. Azure, for example, supports read-only and read/write SSD-backed host caching. This means the available cache can be doubled or tripled, improving performance while avoiding the expensive memory typically consumed by systems like Superfetch.

#3: “Faster and highest performance is not always best.”

In an ideal world, one would purchase the fastest, largest, most cutting-edge platform possible. However, most organisations, if not all, have budgets to work with, and the right configuration depends on your business use case. Your cloud infrastructure should be designed to maximise the benefits while minimising the cost.

An experienced project team should start by establishing the current load on the existing platform and the business requirements. The flexibility of the cloud should then be used to meet existing needs at the lowest possible cost while leaving room for future growth.

#4: “It may not be the right fit for your organisation.”

In one of our projects, a preliminary assessment revealed that the client would need to push 80 GB of data to the cloud every hour. However, with their existing network bandwidth, each transfer would take three hours. Scenarios like this are not a good fit for cloudification unless a faster link is established.
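As a back-of-the-envelope check (our own arithmetic, assuming decimal gigabytes and ignoring protocol overhead), you can compare the link speed the requirement implies with what a three-hour transfer actually delivers:

```python
def required_mbps(gb_per_window: float, window_hours: float) -> float:
    """Sustained link speed in megabits/s needed to move `gb_per_window`
    gigabytes (decimal GB) within `window_hours` hours.
    Ignores protocol overhead, so real links need some headroom."""
    return gb_per_window * 8000 / (window_hours * 3600)

# Pushing 80 GB every hour needs roughly a 178 Mbps sustained link...
print(round(required_mbps(80, 1)))
# ...whereas a link that takes 3 hours per batch is only delivering ~59 Mbps.
print(round(required_mbps(80, 3)))
```

A quick calculation like this during the assessment phase is often enough to decide whether a faster link, or a hybrid design, is needed before committing to the migration.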

This is where the concept of data gravity becomes relevant: cloud-based applications and services tend to move towards massive data stores rather than the other way around. In manufacturing, where a lot of data is generated and consumed on-site, hybrid solutions can be considered; in such cases, it makes more sense to keep the processing close to where the data is generated.

“For a cloudification project, architecture is everything.”

As you may have noticed, many of these issues arise from design concerns. The devil really is in the details.
It is essential to understand your off-premise requirements before moving your workload to the cloud; this will lead to better decisions when designing your infrastructure.

How to address design issues

Work with a team with cloud experience

AWS and Azure offer infrastructure suited to different workloads. You can select what meets your requirements, then test and adapt as necessary. This can be done via thorough pre-migration assessments that identify bottlenecks and future requirements. If your workforce does not have the necessary skills to make such assessments, working with partners will help you close the skill gap. MySASteam, for instance, helped an insurance company speed up SAS application processing by 20% compared with its on-premise configuration, simply by selecting the most suitable VM type.

Partners also have assessment tools to identify possible issues in your existing code and recommend ways to optimise it. MySASteam has solutions such as SAS Healthcheck and SAS Monitor that automatically estimate resource requirements and track issues in SAS code. This automated toolkit helps you identify everything from embedded passwords and hardcoded network/disk paths to the tiers and performance of disks required to make your migration a success.

These tools cut down on the learning curve and allow your organisation to realise its return on investment faster.

Ensure proper project management

We can’t stress enough how important it is for all stakeholders to be on the same page about a migration project. A proof of concept (PoC) helps make the pros and cons of the project more palpable and can often highlight issues that were not anticipated. Many public cloud providers, such as Azure and AWS, often finance PoC studies carried out by an experienced cloud service provider, further reducing the cost of PoCs and assessments.

The majority of cloudification projects run beyond their expected timelines and budgets. Managing the scope and implementing the migration in phases addresses this problem and allows your organisation to realise the benefits sooner.


Cloudification projects don’t typically fail because business rules were implemented incorrectly. More often than not, it is performance and the cost of system and code refactoring that cause projects to fail.

Test cases should include performance metrics, similar to the extensive list found on SmartBear. Because of the elasticity of the cloud, it is also a good idea to create testing scenarios that mirror the production environment exactly as well as ones that differ from it. Performing more comprehensive testing before full acceptance of the cloud solution catches many unforeseen issues early, when they can still be addressed cheaply.


Despite the difficulties of migrating your data analytics platform to the cloud, the benefits still outweigh the possible risks. Hopefully, this article has helped you appreciate the nuances of cloudification so that you can minimise your migration project’s chances of failing.

Infrastructure design is critical to your success, and if your organisation does not have the necessary skills in-house, cloudification can be very daunting. It’s best to work alongside a partner to avoid costly mistakes.

96% of organisations are moving to the cloud. For many, it’s a matter of when, not if. We invite you to get in touch with us to discuss your project, free of charge and with no strings attached.

