Logs are critical for detecting and investigating security issues. They also provide essential visibility into business operating environments.
When organizations are small and just starting out, they can often get away with a local log server and storage to collect data. Most security teams begin with this kind of on-premises logging approach, typically using an open-source or homegrown solution for short-term, small-scale log analytics. When you need to secure just one or two small environments, an isolated logging solution will suffice.
But as environments grow in scale and complexity, spreading across multiple data centers and cloud providers, it becomes necessary to collect logs from all areas of the business. You need to collect logs and events from firewalls, VPNs, IPS devices, endpoints, and other infrastructure to catch indicators of compromise (IOCs) and other security threats before they impact the business. You also need to store this data long-term for investigation purposes, such as determining the first occurrence of a compromise. When your organization has grown to the point where your security team must watch thousands of devices across data centers and cloud providers, it’s time to centralize your logging and move it to the cloud.
The Devo eBook The Shift Is On makes the case for centralized security logging in the cloud, including five best practices that will help you overcome the challenges organizations frequently face as they shift to the cloud:
- Define your short-term and long-term use cases
- Inventory all your data sources
- Build in tolerance for change
- Tailor the implementation plan to your users
- Plan and build best practices for your future business
Best Practice #1: Define your short-term and long-term security logging use cases
Nobody can do everything at once. Set realistic goals for what you want to accomplish in the next 90 days, six months, and one year. Trying to boil the ocean can quickly sink a centralized logging project. Start with the most error-prone applications and services, the ones that consume the most troubleshooting time. Close the visibility gaps in those problem areas first by instrumenting them end to end, and make sure your most critical assets are the most secure. Then tackle longer-term use cases (such as digital transformation) after achieving success with these foundational steps.
Best Practice #2: Inventory ALL your data sources
Gaps in visibility—even small ones—can completely derail a centralized log management (CLM) project. You can’t secure what you can’t see. Think through every attack surface and all the ways a threat can move laterally across multiple environments. Don’t leave anything out. A good CLM solution can collect data from just about any connected source.
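As a concrete starting point, an inventory can be as simple as a machine-readable list of sources per environment that you can check for coverage gaps. This is a minimal sketch; the environment and source names below are hypothetical:

```python
# Sketch: a minimal, hypothetical data-source inventory used to spot coverage gaps.
inventory = {
    "datacenter-east": ["firewall", "vpn", "ips", "endpoint-agents"],
    "aws-prod":        ["cloudtrail", "vpc-flow-logs", "endpoint-agents"],
    "azure-dev":       ["activity-logs"],  # gap: no endpoint telemetry yet
}

# Flag any environment missing a required source category.
required = {"endpoint-agents"}
gaps = {env for env, sources in inventory.items() if not required <= set(sources)}
print(sorted(gaps))  # → ['azure-dev']
```

Keeping the inventory in version control and re-running a check like this whenever environments change helps ensure new attack surfaces don’t slip through unmonitored.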
Best Practice #3: Build in tolerance for change
One of the great weaknesses of most CLM projects is that if you change hardware vendors—or sometimes just hardware versions—your log formats can change and break your log parsers. When that happens, you either stop collecting data or the data you collect becomes unusable junk. There are two ways to overcome this: establish and enforce a log governance model, or use a solution that does not parse on ingest. The second option is far superior. A log governance model is difficult and time-consuming to enforce, and requires a great deal of effort to maintain in a changing environment. A CLM solution that parses on query (instead of on ingest) means changing log formats have no effect on data ingestion or on your ability to analyze logs.
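The parse-on-query idea can be sketched in a few lines. In this hypothetical example (the log lines and regex patterns are illustrative, not any vendor’s actual format), the ingest path stores raw lines untouched, and parsers are applied only at read time—so a vendor format change never interrupts collection, and a new parser can be added later to unlock already-stored data:

```python
# Sketch: parse-on-query tolerates log format changes (hypothetical formats).
import re

raw_store = []  # ingest path: raw lines only, no parsing

def ingest(line: str) -> None:
    """Ingest never fails on a format change: the raw line is stored as-is."""
    raw_store.append(line)

# Two hypothetical firewall log formats, before and after a vendor update.
ingest("2024-01-05 10:00:01 DENY src=10.0.0.5 dst=8.8.8.8")                # old format
ingest("2024-06-01T09:30:00Z action=deny src_ip=10.0.0.5 dst_ip=1.1.1.1")  # new format

# Query path: parsers are chosen at read time and can be updated retroactively.
PARSERS = [
    re.compile(r"^\S+ \S+ (?P<action>\w+) src=(?P<src>\S+) dst=(?P<dst>\S+)"),
    re.compile(r"^\S+ action=(?P<action>\w+) src_ip=(?P<src>\S+) dst_ip=(?P<dst>\S+)"),
]

def query_denies():
    """Parse on query: try each known format; unmatched lines remain stored
    and become queryable as soon as a matching parser is added."""
    for line in raw_store:
        for pattern in PARSERS:
            m = pattern.match(line)
            if m and m.group("action").lower() == "deny":
                yield m.group("src"), m.group("dst")
                break

print(list(query_denies()))  # → [('10.0.0.5', '8.8.8.8'), ('10.0.0.5', '1.1.1.1')]
```

A parse-on-ingest pipeline, by contrast, would have dropped or mangled the new-format lines the moment the vendor update shipped, because its single parser would no longer match.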
Best Practice #4: Tailor the implementation plan to the users of your centralized log management solution
One of the biggest challenges with log data is actually analyzing it. Humans are not good at quickly poring over thousands of rows in a table, so your CLM solution should provide strong data visualizations—the more chart and graphing options, the better. Beware of solutions that limit you to out-of-the-box visualizations that may not fit your use cases. Because you will be combining and analyzing data from many sources, it helps to look at that data in multiple ways. But the solution shouldn’t be so complex that only a few highly specialized people can operate it effectively; that drastically reduces the value of any solution, no matter how good it is. Also beware of solutions that use proprietary scripting languages or require coding to build visualizations.
Your CLM solution should have an intuitive UI that enables users to analyze and visualize data quickly and accurately. The value of a CLM solution is multiplied by the number of users who can leverage it to obtain needed answers. Be sure to give analysts and users of varying skill levels the views they need to do their jobs effectively.
Best Practice #5: Plan and build logging best practices for your future business
It’s vital for a business to grow. But rapid growth can introduce scalability challenges that bring a CLM system to its knees. When this occurs, the reflexive response is often to cut back on monitoring to reduce data ingest, compute, and storage. That is a mistake, because it reduces the value of the solution in which your business has invested. Another pitfall that careful planning can prevent: a successfully implemented CLM solution that lasts only a couple of years before your storage and compute needs outgrow it, forcing you to re-architect, re-purchase, and re-deploy.
The best approach is to use a cloud-native SaaS solution. This enables you to increase capacity on demand without having to budget, acquire, and deploy new hardware. The SaaS model makes it easy to quickly scale up (or down) based on the needs of your business.
To learn more about centralized security logging, download The Shift Is On eBook.