API: The Unsung Hero of the DevOps Revolution
Watch this session to dive into the capabilities of APIs, as well as how organizations have used APIs to solve problems and add security to their value stream.
Transcript
Vance [00:00]
And the Enterprise Integration Summit rolls on. Welcome to the session for Trend Micro. Let me introduce our speaker, Rob Maynard, Global Solutions Architect. Rob, welcome.
Vance [00:32]
You know, we're really glad to have Rob with us this morning. He focuses on DevOps, container architectures, cloud and hybrid computing, and particularly APIs. And prior to Trend Micro, Rob drove API and DevOps success at Ford Motor Company and Volkswagen. So, we're going to get a great technical overview as well as some great user tips in his session, APIs: The Unsung Hero of the DevOps Revolution. You know, we know many of you are just starting or are struggling with DevOps. Rob is here to show us how APIs are proving to be a backbone for DevOps success. He'll show us how APIs provide the power to connect disparate silos or services into one well-oiled machine. He's also going to reveal how companies leverage API connectivity to power and solve DevOps problems, as well as add security to value streams. So, a great session. Before I hand it to Rob, just a quick note. You can download the slides. Just hit that big red button you see there. We've also assembled some great white papers and other downloads for you. So, click away on those if you want. And any questions, just type in the Submit a Question box. So, with that, Rob, let me turn it over to you. Tell us about APIs, the unsung hero of the DevOps revolution.
Rob Maynard [01:16]
Yeah, thanks, Vance. Hi, everyone. I'm Rob Maynard, Global Solutions Architect, as Vance mentioned. I'm in the Research and Development department at Trend Micro. I've been working in the IT field for about 12 years in various capacities. Prior to living my life in front of a computer screen, I was an infantryman in the United States Army from 2002 to 2006. Most recently, I've been working in the DevOps world, on both the dev and ops sides of the spectrum. Today we're going to discuss the concept of systems thinking, and more specifically APIs and how they help organizations achieve the mentality of systems thinking. We'll also discuss how APIs can help security not be a barrier to innovation, as well as some security risks that affect APIs. The DevOps movement has caused a paradigm shift in the way that organizations structure their information technology strategy. Organizations that have embraced this shift and adopted the practices the DevOps movement has to offer have been able to increase productivity, reduce downtime, increase experimentation, and improve morale among IT staff. These practices boil down to three philosophies, or the Three Ways: systems thinking, amplifying feedback loops, and creating a culture of experimentation and learning. Today, we'll focus a little bit on the technological side of the first way and how to take what used to be individual silos, systems, or teams and turn them into one value stream.
Rob Maynard [02:43]
In traditional IT departments, if something within a value stream or application goes wrong, it's generally up to one team, one department, or, in the case of some poor suckers, one engineer to get up and start troubleshooting that issue. Typically, these teams, or silos as we refer to them, are only responsible for that one component, and due to lack of collaboration, time, or whatever the case may be, they really don't have time for proactive maintenance. I've seen this situation firsthand many times, and it makes for a difficult work environment. The concept of systems thinking states that a component of a system will act differently when isolated from the system's environment or other parts of the system. This means that if we look at just one piece of the puzzle, we miss the big picture and can therefore add technical debt or simply delay a larger problem in the system. The phrase putting a Band-Aid on a gunshot wound comes to mind. This concept doesn't apply just to troubleshooting, but also to system design. It's important when architecting a value stream, system, or application to take into account the entire system, not just the individual components. Security, for example, is something that in the past has usually been an afterthought: a system or value stream is designed, built, and deployed, and then security is added by a team that had very little to do with the architecture of the whole system. Generally, the security team goes back and adds a security mechanism, and more often than not, some part of the overall system fails or breaks down.
Rob Maynard [04:17]
By using the concept of systems thinking, this can be avoided, because there's no longer one silo where security lives; it's considered a part of the whole system. Now, so far, we've been talking a lot about systems, but what does that look like in the modern enterprise? Generally, it's a mixture of technological systems like servers, applications, and databases. But there are also things like processes and people that make up the value stream as well. Sometimes there are approvals that need to be made, and those are done by people, very often in a completely different system from the original system that we're talking about. On the technological side, we can connect these systems through automation using APIs. An API, or application programming interface, is an interface that allows two applications or systems to talk to each other. This can be done programmatically using HTTP calls written in a programming language, and there are different types of APIs out there. Today, we're primarily going to look at REST and RESTful APIs, which typically operate over HTTP and HTTPS. HTTPS should really be the only answer there, but, you know, nobody's perfect. Now, there's another aspect of modern systems that comes into play nowadays, and that's the cloud. In the enterprise today, you might have a database in Azure, you might have some Lambdas in AWS, and perhaps you have some components in GCP, all working together to form one application or one system.
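To make that concrete, here is a minimal sketch of a REST call over HTTPS in Python using the requests library; the endpoint, resource, and token are placeholders, not a real API.

```python
import requests

# Hypothetical REST endpoint and token, for illustration only.
API_BASE = "https://api.example.com/v1"
API_TOKEN = "replace-with-a-real-token"

# A simple GET over HTTPS with a bearer token in the request header.
response = requests.get(
    f"{API_BASE}/servers/web-01",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. {"name": "web-01", "status": "running"}
```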
Rob Maynard [05:45]
Now, for those of you who have any experience with cloud computing, you know that cloud providers such as Amazon Web Services rely heavily on APIs to deliver their services. And likewise, the consumers of those services use those same APIs for configuration and management. That same concept can be used for an internal system as well. So, I worked at an organization a few years back that was implementing an on-premises self-service environment deployment system for the development teams, so they could spin up the resources they needed to do testing prior to deploying their applications. The organization leveraged VMware vRealize Automation for the front end. However, there were a lot of other components that needed to be taken into account, even to build one simple server. One of these was subnet and IP assignment, which was handled by another piece of software altogether. By using the API of that system, vRealize Automation was able, when a system was being deployed, to make a call to that API and retrieve the networking components it needed, and the organization was able to connect these two disparate pieces of software into one value stream. And that self-service deployment is a very simple example and really just a snapshot of one piece of a much more complex system. Another general example touches on the ideas of continuous integration and continuous deployment, a couple of buzzwords in the DevOps revolution.
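To give a feel for that kind of integration, here is a rough sketch of the call a deployment workflow might make to an IP address management (IPAM) system; the endpoint, payload, and field names are hypothetical.

```python
import requests

# Hypothetical IPAM (IP address management) API; names are illustrative only.
IPAM_URL = "https://ipam.internal.example.com/api/v1"
API_TOKEN = "replace-with-a-real-token"

def reserve_address(hostname: str, network: str) -> dict:
    """Ask the IPAM system for the next free IP and subnet details."""
    resp = requests.post(
        f"{IPAM_URL}/networks/{network}/next-free-ip",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"hostname": hostname},
        timeout=10,
    )
    resp.raise_for_status()
    # e.g. {"ip": "10.20.30.41", "netmask": "255.255.255.0", "gateway": "10.20.30.1"}
    return resp.json()

# The deployment workflow would feed these values into the VM build.
net = reserve_address("test-web-01", "10.20.30.0-24")
print(net["ip"], net["gateway"])
```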
Rob Maynard [07:07]
A CI/CD pipeline is a way to automate a software delivery process, and you may hear of the CI/CD pipeline as the backbone of a DevOps organization. However, these are generally built using the APIs of the individual components of the pipeline, so I'd argue the API is the actual backbone. Everything from building to testing to delivery, whether that be making the software available or deploying it automatically, that whole process is automated. This means connecting code repositories to testing systems and any other systems that need to be in the flow, and this is all done through APIs. I'm going to pick on Jenkins, which is a popular CI server used to manage CI/CD pipelines. Out of the box, Jenkins doesn't do very much; it requires plugins to connect to things like GitHub, where the code might be stored, or AWS, where the software might eventually be deployed. And the way these plugins are designed is that they tap into the API of whatever system they're looking to connect to. So, in the case of AWS, Jenkins actually reaches out to the AWS API to do what it needs to do. Likewise, with GitHub, it ties into the GitHub API and uses the capabilities within that API to complete the task of pulling down code. Now, for those of you who are new to CI/CD pipelines, just a quick overview of what that is. It starts when a developer merges a code branch: some developer dreams up a new feature for their application and merges that into a GitHub code branch, and that feature is now available in the application. But first, it has to be built out. So that triggers a Jenkins job, and it makes a call to the GitHub API to pull that code down.
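As a rough sketch of the kind of GitHub API call such a plugin makes under the hood, here is how a build step might pull down a branch as a tarball; the repository, branch, and token are placeholders, while the tarball endpoint itself is part of the public GitHub REST API.

```python
import requests

# Placeholder repository and token; the tarball endpoint is the public GitHub REST API.
OWNER, REPO, BRANCH = "my-org", "my-app", "main"
TOKEN = "replace-with-a-github-token"

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/tarball/{BRANCH}",
    headers={"Authorization": f"token {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Save the source archive so the build step can unpack and compile it.
with open("source.tar.gz", "wb") as f:
    f.write(resp.content)
```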
Rob Maynard [08:47]
It's going to build that code. During the build phase, I might want to include some kind of security mechanism, like a runtime application self-protection module. From there, it gets subjected to testing like lint testing, unit testing, and whatever else developers are into nowadays. It's also a good point to add some security testing. For example, in the case of containerization, which a lot of developers are using to deliver their software, I may want to reach out and use a pre-runtime scanner to make sure that the container image I've just built doesn't contain any vulnerabilities, malware, or any hard-coded secrets that a developer may have embedded into the software by accident. If testing is successful, the final product can be made available for consumption or it can be automatically deployed. In addition to utilizing the APIs of the individual components of the system, organizations also have the ability to design their own APIs and add that as a layer to the system as a whole. This allows other value streams, whether internal or external, such as a partner organization, to connect to that system when needed, which helps further streamline business processes.
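Here is a sketch of what that kind of scan gate could look like as a small pipeline script; the scanner endpoint, response fields, and thresholds are hypothetical and stand in for whatever pre-runtime scanner is in use.

```python
import sys
import requests

# Hypothetical pre-runtime scanner API; endpoint and fields are illustrative only.
SCANNER_URL = "https://scanner.internal.example.com/api/scans"
API_KEY = "replace-with-a-real-key"

def scan_image(image: str) -> dict:
    resp = requests.post(
        SCANNER_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"image": image},
        timeout=300,
    )
    resp.raise_for_status()
    # e.g. {"critical": 0, "high": 2, "malware": False, "secrets": []}
    return resp.json()

report = scan_image("registry.example.com/my-app:1.4.2")

# Fail the build if the image has critical findings, malware, or hard-coded secrets.
if report["critical"] > 0 or report["malware"] or report["secrets"]:
    print("Security scan failed:", report)
    sys.exit(1)  # a non-zero exit code is what stops the pipeline stage
print("Security scan passed.")
```

The non-zero exit code is what lets the surrounding CI tool treat the stage as failed and halt the rest of the pipeline.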
Rob Maynard [09:53]
An overarching API layer also allows consumers to take advantage of all the backend services within that system, instead of having to reach out to the components on an individual basis. An industry that's fully embraced the use of APIs is retail. They make a really good example of how APIs can tie disparate systems together into one value stream. So, for example, many organizations in the retail space are developing mobile applications that allow customers to purchase items and have them delivered right to their home, or just learn more about the organization. Now, these mobile applications are using APIs to connect to inventory systems, supply chain management systems, and partners of that retail organization. Generally, these applications will also tell you where a store is located and give you a nice map to follow; thousands of apps and websites utilize the Google Maps API for just that functionality. While these individual components utilize APIs internally, say, to update the inventory system or the order management system, very often there's also an external API allowing other applications or websites to tie into that retail organization and make purchases. Amazon is a great example of this, as oftentimes you're able to purchase via Amazon from non-Amazon websites. Another example comes from one of the largest manufacturers of automobiles and one of my old employers, Ford Motor Company. Their AppLink API suite allows developers to connect their mobile applications to the vehicle infotainment system, SYNC. With this, the application itself runs solely on the mobile device, so your iPhone, or Android phone, or whatever you're into, and then it uses API calls to exchange program data or command information with the actual SYNC infotainment system in your car. This helps developers because they no longer have to develop an application UI specifically for SYNC, and it allows Ford Motor Company to support more applications in their in-vehicle infotainment systems.
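For instance, the store-locator piece often boils down to a single call like this against the Google Maps Geocoding API; the address and API key are placeholders.

```python
import requests

# Placeholder API key; the Geocoding endpoint is part of the public Google Maps Platform.
MAPS_KEY = "replace-with-a-maps-api-key"

resp = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "123 Main St, Columbus, OH", "key": MAPS_KEY},
    timeout=10,
)
resp.raise_for_status()
results = resp.json().get("results", [])
if results:
    location = results[0]["geometry"]["location"]
    print(location["lat"], location["lng"])  # coordinates the app can drop on a map
```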
Rob Maynard [11:57]
And we touched on this a little bit already, but what we typically get asked by our customers at Trend Micro who are going down the road of automation is how they can integrate security into their systems without slowing down the pipelines that they're building. The answer here, of course, is the same as with any other component: via APIs. One of the newer use cases we've been seeing over the past couple of years revolves around serverless. Serverless, as in AWS Lambda or Azure Functions, allows customers to run code in the cloud on a consumption basis, but without access to the underlying host, because even though it's serverless, it's still running on somebody's server. So, without access to that server, it becomes very difficult to secure that piece of software. And that's where something like a RASP, or runtime application self-protection module, comes in handy. A RASP gets embedded into the software by way of import statements; the developer can add it on their own without involving the security team. At Trend Micro, we have Cloud One Application Security. Using something like CloudFormation to deploy that serverless app, the API calls needed to activate Application Security with the manager, as well as configure settings and policy, can be included, so the application is protected right at launch. What we typically see is a system that is made up of multiple serverless scripts, and that number goes up when we're talking about multiple systems. Using the API to configure the RASP does two things. One, it allows developers to continue innovating without worrying about being slowed down by security, and two, it ensures that that application or piece of the system is protected right out of the gate.
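To make the import-statement idea concrete, here is a hypothetical sketch of a Lambda handler wrapped by a RASP library; the module name, decorator, and environment variables are invented for illustration and are not the actual Cloud One Application Security SDK or API.

```python
import os

# Hypothetical RASP library; the module name and decorator are illustrative,
# not the actual Cloud One Application Security SDK.
import rasp_protection

# The key and secret would be injected at deploy time (e.g. by the CloudFormation
# template), so the function is protected from its very first invocation.
rasp_protection.configure(
    key=os.environ["RASP_KEY"],
    secret=os.environ["RASP_SECRET"],
)

@rasp_protection.protect
def handler(event, context):
    # Normal business logic; the RASP layer inspects the payload for attacks
    # such as injection or illegal file access before this code runs.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```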
Rob Maynard [13:55]
Now, another popular use case that we've been seeing at Trend Micro has to do with containerization. Many development teams are delivering their products containerized nowadays, as discussed above, and this is usually done via automated pipelines like the CI/CD pipeline. Therefore, developers once again do not want security to be a barrier to innovation. So what we suggest at Trend Micro is to add the security into the pipeline, and once again, this can be accomplished via the API. The first way to do this is by including a RASP within that containerized application, like we discussed with serverless. That way the developers can import the RASP library on their own, and then during the build process, the API can be leveraged to activate and configure the security within the RASP. The second method is by using a pre-runtime scanner. So, as part of that testing piece within the CI/CD pipeline, we simply make an API call to a pre-runtime scanner. For example, at Trend Micro we have Cloud One Container Scanning, and we use an API call just to initiate the scan. And if a certain threshold is met, like a certain number of vulnerabilities, or if the image contains any malware, we can stop that pipeline and report the findings back to people who can fix whatever the issue is. In more advanced scenarios, in conjunction with cloud service provider APIs, further action can even be taken through the use of serverless technologies. For instance, we can send that bad image to a locked-down repository where it can't be launched, or, in the case of one of our partners, even launch a pull request in GitHub to update vulnerable libraries.
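As a sketch of that last scenario, a small serverless function triggered by the scan findings could open a pull request through the GitHub REST API; the repository, branch names, and token are placeholders, and the fix branch is assumed to already contain the updated library versions.

```python
import requests

# Placeholder repository and token; the pulls endpoint is the public GitHub REST API.
OWNER, REPO = "my-org", "my-app"
TOKEN = "replace-with-a-github-token"

def open_dependency_fix_pr(fix_branch: str) -> str:
    """Open a PR from a branch that already contains the updated library versions."""
    resp = requests.post(
        f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
        headers={"Authorization": f"token {TOKEN}"},
        json={
            "title": "Update vulnerable dependencies flagged by image scan",
            "head": fix_branch,  # branch with the bumped library versions
            "base": "main",
            "body": "Automated PR created from container scan findings.",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]

print(open_dependency_fix_pr("security/bump-openssl"))
```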
Rob Maynard [15:11]
Some other cases include leveraging Kubernetes and its admission controllers, in which case you can simply hit the Kubernetes API and prevent one of those containers from running. So, we have some customers that use our Cloud One Workload Security product, which is our server security toolset, and they use it only via the API. They absolutely never log into the console; they never want to log into the console. They simply use the API, and they accomplish everything from pulling down the agent installation script to adjusting policy through automation. And when a new instance of a server spins up, it's automatically protected. And if any changes to the security policy are needed, they can be made as a sweeping change via a call to the API. Now, for those thinking about introducing a custom API layer to their system, there are some best practices to keep in mind. First and foremost, your API should only communicate over HTTPS. This is nothing new; this is the same as any other web traffic. We want to ensure that the traffic is encrypted. By doing this, you not only encrypt the traffic end to end, but you can also simplify the authentication credentials to a randomly generated token. Further, on the subject of authentication, you have to make sure that all endpoints are protected behind authentication if you're not using an API gateway. We'll discuss API gateways in a minute, but if you're not using one, you need to make sure that every endpoint you have is authenticated.
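Here is a rough, purely illustrative sketch of that console-less pattern; the endpoints, headers, and policy fields are hypothetical and are not the actual Cloud One Workload Security API.

```python
import requests

# Hypothetical endpoints and fields illustrating the "API only, no console" pattern;
# they are not the actual Cloud One Workload Security API.
BASE_URL = "https://workload-security.example.com/api"
HEADERS = {"api-key": "replace-with-a-real-key", "Content-Type": "application/json"}

def get_agent_install_script() -> str:
    """Pull the agent deployment script so new instances can self-install at boot."""
    resp = requests.get(f"{BASE_URL}/agent-deployment-script", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.text

def update_policy(policy_id: int, changes: dict) -> None:
    """Apply a sweeping policy change across every server assigned to this policy."""
    resp = requests.post(
        f"{BASE_URL}/policies/{policy_id}", headers=HEADERS, json=changes, timeout=30
    )
    resp.raise_for_status()

# Example: turn on intrusion prevention for every workload under policy 42.
update_policy(42, {"intrusionPrevention": {"state": "prevent"}})
```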
Rob Maynard [16:43]
It's also very important to be aware of how the URLs for your API endpoints are crafted. A lot of information can be exposed in the URL, things like passwords, API tokens, API keys, and session tokens, and for obvious reasons, you don't want that to be exposed. So, you want to make sure that the URLs that you're crafting for your API endpoints don't have any information like that. Some other best practices to take into account are things like adding a timestamp as a custom header in your API requests. What you can do then is set it up so your API only accepts a request if it falls within a certain time frame. This helps protect against replay attacks and brute force attacks, and it's just good practice to have in your API. If possible, it's also good practice to use an API gateway. As we touched on prior, an API gateway acts as a reverse proxy to handle the incoming API calls and direct them to the endpoints and backend services. It also handles authentication and rate limiting to help prevent malicious actors from doing bad things. From a performance standpoint, it allows you to enable caching as well. It also simplifies the authentication process, as you only have to authenticate at the gateway and not at every endpoint. So, this is a real easy way to ensure that authentication is handled in front of all your endpoints. You can also add monitoring and analytics tools to understand how people are using your API. This is amplifying the feedback loop, the second way, and it allows you to make informed changes to your API. What's also nice about the API gateway is that if you do make changes, you don't have to redo any URLs or DNS, because you're still pointing to the API gateway and there are no changes for your end users. So, I hope this has helped you learn a little bit about how you can use APIs in your organizations to really develop a mentality of systems thinking. Really looking at our systems as a whole will help when things go wrong, because we aren't just fixing individual problems and adding technical debt that may rear its ugly head down the road. So, with that, thank you very much for your time. I'd like to turn it back over to Vance.
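A minimal sketch of that timestamp check on the server side, assuming a small Flask service; the header name and the five-minute window are illustrative choices.

```python
import time
from flask import Flask, request, abort

app = Flask(__name__)
MAX_SKEW_SECONDS = 300  # reject requests more than five minutes old (illustrative window)

@app.before_request
def reject_stale_requests():
    # Expect the client to send its request time as a Unix epoch value
    # in a custom header; the header name here is just an example.
    ts = request.headers.get("X-Request-Timestamp")
    if ts is None:
        abort(400, "Missing timestamp header")
    try:
        skew = abs(time.time() - float(ts))
    except ValueError:
        abort(400, "Malformed timestamp header")
    if skew > MAX_SKEW_SECONDS:
        abort(401, "Request outside the accepted time window")

@app.route("/inventory/<item_id>")
def get_item(item_id):
    return {"item": item_id, "status": "in stock"}
```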
Vance [19:00]
Wow, Rob. Really great session, and I really love how you put together the ideas here, the context of integration and API professionals in this whole new world of cloud native and DevOps and CI/CD. Really, really great session.
Rob Maynard [19:12]
Thank you. And thank you for having me.
Vance [19:15]
Yeah, it was our pleasure for sure. You mentioned questions; we certainly do have some. Let's start off with a big-picture question, because there are a couple that feed into this basic theme that you had. You know, many of our attendees, as you might expect at an Enterprise Integration Summit, have been using APIs, and REST in particular, to integrate apps or get data to mobile users for many years. And they don't often think that they can pivot their skills to DevOps or CI/CD. So maybe give us some advice on how API experts can play a role in this new cool world of CI/CD.
Rob Maynard [19:49]
Yeah, absolutely. So, you know, as I stated in the talk, with DevOps you have all these connections to different systems, especially within the CI/CD pipeline, and a lot of people think that's the backbone of DevOps. But because you have to connect them with APIs, APIs, I would argue, really are the backbone there. And it's really on the experts to help the rest of the organization understand how they can leverage these APIs within their DevOps organization. You know, part of the DevOps movement is cultivating an environment of learning. So, for all the experts who already know how to leverage APIs, I would say it's up to them to really spread the good news to the other people on their team and just kind of experiment and let the rest of the group know that, hey, this is what we can do to make ourselves more efficient. Anything that can be automated should be automated, and all of that's done through the API.
Vance [20:41]
Yeah, really great point. And in fact, you also brought up the point about how APIs can power the DevOps revolution, particularly in security and value stream management. So, lots of opportunity for API people to play a role in CI/CD, right, Rob? So, let's drill into some of the specific ways. We have a couple of technical-type questions here; let's go into those. First off, we talked a little bit about permissions and granting access. Talk a little bit more about the types of ways that you're seeing companies look at use cases to grant access to data through an API.
Rob Maynard [21:21]
Yeah. So, you know, whenever you grant access to anything, whether it be via the API or even just a Windows file share, it should always be the philosophy of least privilege. Right. You don't want to give everybody the ability to write. You really just want to give people enough permission to do what they need to do. If they only need to get the name of a computer by pinging the API to do that, then they should only get the computer name. That really doesn't change from what we all learned in school. It should just be the least privilege, really.
Vance [21:50]
Yeah. And let's go into the Trend Micro way of how you guys look at this whole idea of data transfer through an API, talk about how and when encryption might be necessary for that kind of transfer and how Trend Micro makes that so much easier.
Rob Maynard [22:04]
Any time you're transferring data, once again, whether it be via an API or anything else, any time it's going across the wire, encryption should always be implemented. So, at Trend Micro, that's something we really preach; we give that advice to our customers who are going down this road and need our guidance, and in our own APIs, we enforce encryption across our channels. So any time there's data going across the wire, we need encryption there.
Vance [22:29]
Wow, Rob, so you take the approach that rather than getting granular, into the weeds about what-if scenarios and case studies, the Trend Micro approach is to default to encryption, and then the API will make that much easier to accomplish.
Rob Maynard [22:45]
I mean, absolutely. In this day, with all the breaches and things that happen on a daily basis, absolutely, encryption should be the number one thing that you put into place. It's very easy; all systems give you the option to use encryption if they don't enforce it out of the box. So absolutely, if it's there, use it. Definitely.
Vance [23:04]
It's that simple. That's great. You know, we talked about the data in the pipe. Let's talk about the actual endpoint of the pipe. Another question here asks what types of suspicious behavior we should be on the lookout for when it comes to keeping our APIs safe and secure.
Rob Maynard [23:20]
So, certainly, common attacks. You know, replay attacks, constantly trying to ping a resource to basically initiate a denial of service, brute forcing; those are the types of attacks. So what you would see on the end-user side, and we'll call it the endpoint end, would be a lot of traffic really trying to hammer a certain endpoint. And you can imagine, if you're only serving a small piece of data in JSON format from that API and you see a lot of traffic coming in, you can pretty much guess somebody's trying to do some dirt. Some other things to really look out for, if you have the ability to see request logs: are people trying to poke around with your URLs? Are they trying to experiment and see what else they can get out of there? Or are they trying to force URLs to do things that you didn't design within your API? Things like that.
Vance [24:10]
Really excellent. Really excellent list. This has been a fantastic session, Rob. I see time's just about up. But before you go, give us a suggestion of where folks can learn more about Trend Micro, or even get a demo or go more hands-on with the technology. Not only do we have folks who are in the API security space, but as you mentioned, we've got the DevOps folks. We've also got a very strong hybrid integration community here that is looking to tie together their on-premise and cloud systems. So maybe give us a couple of highlights of where we can send those folks.
Rob Maynard [24:42]
Yeah, absolutely. So, some of the information that we provided will link you to a couple of the products that I mentioned in my talk, one of which is Cloud One Application Security, which is our RASP product; it can get embedded in really any web application, but for our cloud users out there, with Lambdas, Azure Functions, that type of thing, you can embed it in there. And one other thing, speaking of CI/CD, which I also mentioned in the talk, is our Cloud One Container Scanning. That's the pre-runtime container scanning mechanism, and it fits right into the CI/CD pipeline as part of that automated testing piece, and all of that can be controlled via API. And for anybody else in that integration space, definitely check out our Cloud One Workload Security. That is really our flagship cloud security suite, and it will help you protect your EC2 workloads and get better visibility into your cloud landscape.
Vance [25:35]
Wow, great list of assets there. But what I really like about it is that Trend Micro has taken this approach much like API professionals have these days with this idea of an API lifecycle. You talked about the pre-provisioning aspect as well as once it's out, up and running. So really a great list of assets. We really appreciate your time.
Rob Maynard [25:54]
Thank you, man. That was a lot of fun, and happy to be here.
Vance [25:57]
Yeah. And we're happy to have you here. So just a quick recap for our attendees: Rob mentioned many resources, and we're lucky enough that he and his team put together a list of links that will take you to many of them. And again, let me recommend you take a look at the slides; that big red button will get them to you right away. For those that we weren't able to post here in the breakout room, we've got good news: at the end of Rob's deck, there's a slide we call the "for more information" slide. You'll be able to go directly to the Trend Micro website and get many of the other assets we weren't able to fit here. So, with that, let me thank Rob Maynard for a great session. And thank you all very much for attending.