How to Leverage the AWS Cost Optimization Pillar
Explore the Cost Optimization pillar of the AWS Well-Architected Framework and gain best practices for designing processes that make it possible to go to market and optimize costs early on.
Related articles in the Well-Architected series:
- Operational Excellence
- Overview of All 5 Pillars
- Security Pillar
- Reliability Pillar
- Performance Efficiency Pillar
- Sustainability Pillar
More often than not, businesses make strategic decisions to get services up and running as quickly as possible, dealing with cost control after the fact. When the pressure is on to go to market early, this seems like the right thing to do; however, you may end up with costs that outweigh the benefits.
To help avoid this, Amazon Web Services (AWS) has outlined best practices for building in the cloud, creating pillars that, if adhered to, will enable a well-architected framework. One of those pillars is Cost Optimization. Adhering to this specific pillar will make it possible to go to market and optimize costs early on—the best of both worlds.
Within the Cost Optimization pillar, AWS defines five design principles to follow. Applied to a cloud environment, especially from the very beginning, they make it possible to build the cloud you need while spending the appropriate amount of money on it.
Interested in knowing how well-architected you are? Check out the free guided public cloud risk assessment to get your own results in minutes.
Five core design principles for Cost Optimization
Too many businesses have gone down the path of paying more for their cloud environment than they anticipated or than was appropriate. Using these five core design principles, you can avoid taking that journey, along with the not-so-fun chat with your CFO.
1. Implement and invest in cloud financial management. This will take time, people, and processes to learn to manage costs for cloud services effectively, but the money spent here will be well justified by the increased savings seen.
2. Adopt a consumption model: pay only for what you use and be aware of environments that sit inactive. For example, if a test environment is not used outside business hours, power down those systems.
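The off-hours principle can be automated. Below is a minimal sketch in plain Python of the decision logic (the tag names and business hours are illustrative assumptions, not AWS conventions); in practice you would wire something like this to the EC2 API or a scheduler:

```python
from datetime import datetime

# Hypothetical policy: run test environments only 08:00-18:00 on weekdays.
BUSINESS_HOURS = range(8, 18)  # 08:00 inclusive to 18:00 exclusive

def should_stop(instance_tags: dict, now: datetime) -> bool:
    """Return True if an instance should be powered down right now."""
    if instance_tags.get("Environment") != "test":
        return False  # this sketch only manages test environments
    off_hours = now.hour not in BUSINESS_HOURS
    weekend = now.weekday() >= 5  # 5 = Saturday, 6 = Sunday
    return off_hours or weekend

# A test instance at 22:00 on a Tuesday should be stopped.
print(should_stop({"Environment": "test"}, datetime(2024, 1, 2, 22, 0)))
```

The same predicate, evaluated on a schedule, can also power systems back on when business hours resume.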
3. Measure overall efficiency through careful tracking and analysis of where money is being spent and how that investment is benefiting the business.
4. Stop spending money on undifferentiated heavy lifting. Leave the capital expenditures to the cloud provider. Let them buy, build, install, and manage the physical equipment. Your focus should be on the operations and services or applications needed to serve your customers.
5. Attribute the spend on services to the corresponding business units or projects through tags. This allows for easier return on investment (ROI) calculations.
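As a sketch of what tag-based attribution looks like, the following plain-Python example sums cost per team tag (the line items and tag keys are illustrative, not a real billing feed):

```python
from collections import defaultdict

# Illustrative billing line items; in practice these would come from a
# cost and usage report with cost-allocation tags enabled.
line_items = [
    {"service": "AmazonEC2", "cost": 120.50, "tags": {"team": "payments"}},
    {"service": "AmazonS3",  "cost": 30.25,  "tags": {"team": "payments"}},
    {"service": "AmazonRDS", "cost": 210.00, "tags": {"team": "search"}},
]

def cost_by_tag(items, tag_key):
    """Sum cost per value of the given cost-allocation tag."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(tag_key, "untagged")] += item["cost"]
    return dict(totals)

print(cost_by_tag(line_items, "team"))
# → {'payments': 150.75, 'search': 210.0}
```

Untagged spend lands in its own bucket, which is itself a useful signal that tagging discipline needs attention.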
Practice cloud financial management (CFM) in your Well-Architected Framework
CFM is the practice of ensuring money spent on cloud services results in the benefits needed by the business. There are nine best practices for CFM defined by AWS:
1. Establish a cost optimization function to help generate a culture of cost awareness. It is essential to have an executive sponsor for this role to champion cost ownership. When searching for the right person or team for the function, you’ll want to look for those with proficiencies in data science, project management, software development, financial analysis, and infrastructure development.
2. Create a partnership between finance and technology, as there are always changes in financial management when moving to the cloud. Traditionally, a lot of analysis is done on a project plan to determine outcomes and benefits before you are granted permission to spend money on the equipment needed. This is known as a capital expenditure (CapEx). Equipment is usually purchased with an expected lifecycle of three to five years. Cloud services are categorized as a different expense, known as an operational expenditure (OpEx). Without a good relationship between finance and technology, the cloud could end up costing you a lot more than a traditional data center—without the benefits.
3. Forecast cloud budgets. You can use trend-based algorithms and/or business-driver-based algorithms to understand future costs and plan appropriately. AWS has a tool, AWS Budgets, to assist you in working through forecasting, but as with all tools, it is critical to configure and use it appropriately. This is where Trend Micro comes in handy, as it automatically monitors AWS Budgets with rules from the Conformity Knowledge Base. This helps to ensure that you have the best configurations in place to make the most of all tools.
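A trend-based algorithm can be as simple as a least-squares line through recent monthly spend. This illustrative sketch (plain Python, made-up numbers) projects next month's cost:

```python
def forecast_next(monthly_costs):
    """Fit a least-squares line to monthly spend and project one month out."""
    n = len(monthly_costs)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(monthly_costs) / n
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, monthly_costs))
    slope /= sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * n  # projected cost for month n+1

# Illustrative: spend trending up roughly $100/month.
print(round(forecast_next([1000, 1100, 1200, 1310]), 2))  # → 1410.0
```

Business-driver-based forecasting replaces the time index with a driver such as active users or transactions, but the fitting step is the same.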
4. Ensure cost awareness in your processes. Whether a process is new or already established, take steps to ensure it is cost aware. If you come across processes in the organization that are not, try to modify them rather than replacing them with new ones. By modifying rather than replacing, you reduce the impact on the speed and agility your development and deployment processes already exhibit. Here are a few tips:
- To proactively address cost concerns, add a quantification of monetary impact to change management to help with budgeting.
- Existing operating capabilities, such as incident management, should be extended to include cost optimization. That way, if there is a cost overage, you should be able to use existing incident management people, tools, and processes to identify the root cause and address that overage appropriately.
- When discussing the cost of implementing a cloud service, look at the costs from a return on investment (ROI) perspective.
- In an ideal situation, departments and teams should be discussing cost on a regular basis. An effective way to achieve this is by including cost optimization in your organization’s awareness and training programs.
- Reports on cost optimization should be generated, read, and responded to as necessary. Tools like AWS Budgets, AWS Cost Explorer, and Amazon QuickSight can help facilitate your reporting.
5. Create a cost-aware culture by starting small. Introduce cost optimization and CFM in a decentralized manner—one project at a time, one team at a time. Some good recommendations from AWS to achieve this are:
- Gamify cost and usage by creating a dashboard that displays costs per team. Make this visible to the business, so teams that are reducing and managing cost the most efficiently can be recognized, rewarded, and learned from.
- Reward voluntary cost optimization accomplishments publicly. Public discussion of cost-efficient management helps facilitate the continued change in culture.
- Implement requirements from the top down that will ensure cloud workloads are designed and built in a way that fits within the pre-defined budget.
6. Quantify the business value delivered through cost optimization per business outcome. Conformity has many rules to assist with this, such as removing unused DynamoDB tables, identifying and removing unattached Amazon EBS volumes, and ensuring Amazon Elastic Compute Cloud (Amazon EC2) reserved instances are regularly reviewed for cost optimization.
7. Keep up to date with new service releases. Regularly consult experts or AWS Partners to explore cost-effective services, and leverage new AWS functionality to accelerate innovation and improve cost efficiency. Resources like AWS Cost Management, the AWS News Blog, the AWS Cost Management Blog, and What's New with AWS provide summaries of new service, feature, and Region announcements you can use to optimize your AWS environment.
8. Create a cost-aware culture. To foster a culture of cost-consciousness within your organization, initiate and execute changes or initiatives across the board. It is advisable to begin with smaller-scale endeavors, gradually expanding as your capabilities grow and your organization's adoption of cloud technology expands. Ultimately, implement comprehensive and extensive programs to effectively promote cost-awareness throughout the entire organization.
9. Quantify business value from cost optimization. By quantifying the business value derived from cost optimization, you gain a holistic understanding of the numerous advantages it offers to your organization. Since cost optimization requires an investment, quantifying the business value enables you to articulate the return on investment to stakeholders. This quantification not only helps secure greater support from stakeholders for future cost optimization initiatives but also establishes a framework for evaluating the outcomes of your organization's cost optimization efforts.
Three key expenditures and usage awareness pointers from AWS
Cloud services are typically used by multiple teams spanning different departments throughout the organization. Tracking costs per team is critical to understanding how much the services your team has built or subscribed to cost and the benefits the business is achieving as a result. AWS recommends a multi-faceted approach to understanding your usage and cost expenditures, defining three key factors to look at.
Control needs to be established from higher levels of management in the business, starting with organizational policies. These policies should identify the requirements for building and managing cloud workloads.
It is also critical to create goals and targets for business growth in the cloud. When these are clear, it is easier to determine whether the expansion of cloud services is heading in the right direction financially. It is highly recommended that you use AWS Budgets to manage budgets for your cloud environments.
AWS allows you to have one management account that applies to the whole business; however, you may create multiple member accounts so each team can manage its own costs. Member accounts can also be used to control and restrict the flow of information between different accounts. AWS Organizations is a great tool to help generate this structure.
Within each account, you will create individual users, assigning different levels of control based on their needs, from administrators with full control over the account to users with far more limited access. AWS Control Tower is another service that aids in the configuration of multiple AWS accounts that you may find useful to explore.
When creating individual users, it is best to assign them to groups and roles. AWS Identity and Access Management (IAM) is a great tool for managing users and permission levels for staff and third parties requiring access to your AWS account. In doing this, you can properly control what any given user has authority to do.
You need to be proactive about monitoring your costs and service usage to be able to easily recognize when expenditures are getting out of control. You will want to report on everything you are monitoring, ensuring the budget is being spent efficiently; a cost and usage report (CUR) will be most effective here. A CUR contains detailed information that you can use to monitor your costs and can be generated on an hourly, daily, or monthly basis per product or resource. There is a CUR tool offered by AWS that can help make this process easier.
In order to get the information breakdown you need, careful design and setup of your tags is required. The report can be delivered to an Amazon Simple Storage Service (Amazon S3) bucket, and once there, you can use AWS Glue to prepare the data for analysis in Amazon Athena.
Establish metrics for these reports that define acceptable cost levels. These metrics allow you to track your costs and ensure you are spending within budget.
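As a sketch of what such a metric might look like, this minimal example classifies actual spend against a budget, similar to the alert thresholds you can configure in AWS Budgets (the threshold value here is an illustrative assumption):

```python
def budget_status(actual, budgeted, alert_threshold=0.8):
    """Classify spend against budget, mimicking a simple budget alert.

    alert_threshold is the fraction of budget at which to warn (assumed 80%).
    """
    ratio = actual / budgeted
    if ratio >= 1.0:
        return "over budget"
    if ratio >= alert_threshold:
        return "approaching budget"
    return "within budget"

print(budget_status(850, 1000))  # → approaching budget
```

Run against each team's tagged spend, a check like this turns raw report data into something a process can respond to.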
It is important to track resources through their lifecycle. When a service, server, function, etc. is no longer needed, it should be shut down and decommissioned. For example, you will incur charges when a virtual machine (VM) is powered on, even if it is not actively processing data. With that in mind, it is important to have a decommissioning process in place. You can manually hunt for the resources that aren’t being used, then decommission them. However, this can be a time-consuming task, and your time could be put to better use. Instead, consider using AWS Auto Scaling to perform decommissioning efficiently and automatically.
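Before automating, it helps to see what identifying idle resources involves. This illustrative sketch flags low-utilization instances as decommissioning candidates (the utilization numbers are made up; in practice they would come from Amazon CloudWatch metrics over a lookback window):

```python
# Illustrative average-CPU-per-day samples for three instances.
instances = {
    "i-webserver": [42.0, 55.3, 61.1],
    "i-old-batch": [0.4, 0.2, 0.1],
    "i-forgotten": [1.0, 0.9, 1.2],
}

def idle_candidates(utilization, cpu_threshold=5.0):
    """Return instances whose average CPU stayed under the threshold,
    as candidates for decommissioning (after human review)."""
    return sorted(
        name for name, samples in utilization.items()
        if sum(samples) / len(samples) < cpu_threshold
    )

print(idle_candidates(instances))  # → ['i-forgotten', 'i-old-batch']
```

A human review step matters here: an instance can be low-CPU but still essential (a standby, for example), which is why the function only nominates candidates.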
Trend monitors AWS Auto Scaling with rules to ensure you are properly configured and optimized. If you want to give it a try, sign up for our free trial.
The trick to cost-effective resource management is finding the right services in the right size and type to fulfill your workloads’ needs. AWS identified four things to consider when selecting resources:
I. Evaluate cost
In order to select the best resources, you need to start by identifying the business requirements and all of the associated workload components. Over time, a workload can change the amount of resources it consumes, which means it's important to review and analyze the services in use to ensure cost optimization. AWS has a few tools, such as AWS Cost Explorer and the CUR, to aid in your assessments.
Managed services can be a good solution for reducing cost. When you select a managed service, you are removing the burden of operational and administrative overhead. This allows you to focus your energy on what you do best: innovate. There are two approaches to consider if you choose to go in this direction. One is AWS Managed Services (AMS), which alleviates you from managing the infrastructure. The second is using serverless services, removing both the management of the server and the infrastructure from your plate. Both options are priced to scale cost efficiently.
Another large cost you may be able to avoid is software licensing. It is possible to avoid that fee by using open-source software. This may not work for all environments, but it is worth assessing.
II. Select the correct type, size, and number of services
It’s essential to find and purchase the exact services required to meet a workload’s needs. And while this will take some work to figure out, in the end, it could save your business a lot of money. Cost modeling allows for an analysis of predicted loads and the costs that would be incurred. AWS has a tool called AWS Compute Optimizer that can assist you with determining the right computing options.
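Cost modeling of this kind can be sketched as picking the cheapest option that satisfies the workload's requirements. The catalog below is hypothetical (the names and prices are not real AWS pricing):

```python
# Hypothetical instance catalog: (name, vCPU, memory GiB, $/hour).
catalog = [
    ("small",   2,  4, 0.05),
    ("medium",  4,  8, 0.10),
    ("large",   8, 16, 0.20),
    ("xlarge", 16, 32, 0.40),
]

def right_size(need_vcpu, need_mem_gib):
    """Pick the cheapest catalog entry satisfying the workload's needs."""
    candidates = [(price, name) for name, vcpu, mem, price in catalog
                  if vcpu >= need_vcpu and mem >= need_mem_gib]
    if not candidates:
        raise ValueError("no instance type satisfies the requirements")
    return min(candidates)[1]  # cheapest adequate option

print(right_size(3, 6))  # → medium
```

AWS Compute Optimizer performs a far more sophisticated version of this analysis using observed utilization, but the underlying trade-off is the same: meet the requirement, then minimize price.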
It is good to establish metrics and monitor workloads on a continual basis. With metrics in hand, you can track and manage the services consumed to ensure that money is being spent wisely. AWS Auto Scaling can automatically adapt services to make this even more cost efficient.
III. Select the most suitable pricing model
AWS has many different pricing models to choose from. AWS Cost Explorer can guide you through the selection process and help identify what works best for your workloads. Models include:
- The on-demand model—this is the default pricing model for the cloud and follows the pay-as-you-go structure.
- Spot model—utilizes spare resources with up to a 90% discount with AWS. These are resources that no other AWS customers are currently using. The downside to this model is that AWS could give you a two-minute warning that they need the resources back. Although AWS says that this rarely happens, it can, so choose this option very carefully.
- Commitment discount savings plan model—this is effectively a savings plan, allowing you to commit to a consistent amount of usage; based on that commitment, AWS gives you a break on cost. It is priced per situation.
- Commitment discounts reserved instances/capacity model—this is like the savings plan, but is only available for specific types of resources that can be reserved. AWS offers up to a 72% discount, again, priced per situation.
- Third-party agreements and pricing model—since this is not AWS, there is nothing that can be predicted here. All contracts should be reviewed carefully to ensure that cost will be optimized based on your workloads.
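One useful piece of arithmetic when weighing commitment discounts against on-demand pricing is the break-even utilization: how much of the time a workload must actually run before the commitment pays off. A minimal sketch with illustrative rates (not actual AWS prices):

```python
def breakeven_utilization(on_demand_rate, committed_rate):
    """Fraction of hours a workload must run for a commitment (charged
    for every hour, used or not) to beat paying on demand."""
    return committed_rate / on_demand_rate

# Illustrative: a commitment at a 40% discount pays off once the
# workload runs more than 60% of the time.
print(round(breakeven_utilization(0.10, 0.06), 2))  # → 0.6
```

Workloads that run well above the break-even fraction are good commitment candidates; spiky or short-lived workloads usually are not.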
IV. Plan for data transfer
To effectively plan and manage cost, it is good practice to know what data transfer is occurring in your workloads. This includes knowing where the data flows, where it is stored, and who or what services need access to it. Knowing this information, you can make a more accurate decision on what architecture should be used to manage the transfer.
Manage supply and demand resources in your Well-Architected Framework
There is a balance to be struck between two basic configurations—having just the right number of services needed versus having high availability that protects the workload from failure by building in redundancy.
With the cloud, you only pay for what you use, and there are three ways to manage demand: throttles, buffers, and queues. Throttling can be done with an API Gateway, whereas buffering can be done with Amazon SQS and Amazon Kinesis. Building for redundancy requires a different set of configurations. Redundancy does not change the per-use cost of running a workload moment to moment, but the standby capacity it requires comes with a bigger price tag. For example, if there is a failure within a service, it will automatically fail over to another server or data center as needed. This automatic failover capability usually costs significantly more, so choose redundancy based on the business's requirements and ensure that the cost is planned for and acceptable.
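Throttling of the kind API Gateway provides is commonly implemented as a token bucket. This self-contained sketch (not AWS code; the rate and capacity parameters are illustrative) shows the core idea:

```python
class TokenBucket:
    """Minimal token-bucket throttle: allow bursts up to `capacity`,
    refilling at `rate` tokens per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full
        self.last = 0.0

    def allow(self, now):
        """Return True if a request arriving at time `now` (seconds)
        may proceed; otherwise it should be rejected or queued."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)  # 1 request/sec, burst of 2
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.2)])
# → [True, True, False, True]
```

Requests that fail the check can be dropped with a 429-style response (a throttle) or placed on a queue such as Amazon SQS to be served later (a buffer); the bucket is the decision point either way.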
Demand-based supply utilizes the elastic nature of the cloud. The systems can scale resources up or down as needed. AWS Auto Scaling can assist with the management of scaling at a predictable performance rate, while keeping costs as low as possible. As with most AWS resources, Trend has written rules to ensure successful management of your cloud environment. With your configuration in place, AWS Auto Scaling can be triggered when certain conditions are reached, with Amazon CloudWatch sending the notification that initiates the scaling action.
Time-based supply allows for resources to be managed and scaled when they are predictably going to change over time. The goal here is to ensure resources are available at the time they are needed. AWS Auto Scaling can be used for this type of scaling as well.
Cost optimization over time
Service offerings change on a regular basis, so it is always a good idea to periodically review the choices you’ve made. New innovations may allow you to find a more cost-effective or performance-efficient choice.
There should be a workload review process established to continuously assess the services in use. Workloads that have a higher price tag should be reviewed more regularly than those that cost less. If a decision is made that it is not financially logical to change services now, that does not mean that the same answer will be true in six months, a year, or more.
Adhering to the Cost Optimization pillar
Managing cost is incredibly important to every aspect of your business, especially when you're making a significant change in culture or infrastructure, such as shifting to the cloud. By taking everything you've read and carefully designing a set of processes that align with your business's needs, you'll be well on your way to adhering to the Cost Optimization pillar of the AWS Well-Architected Framework. What's more, once you've laid the groundwork, it can all be automated to enable you to do more of what you love: building great applications.
Interested in knowing how well-architected you are? Check out the free guided public cloud risk assessment to get your own results in minutes.