By Chris O'Brien

Logs began with UNIX in the late 1960s, partly to preserve the culture of close communication in programming. Luckily, that culture has held fast as programming and technology have taken many different shapes over the years. Today, the idea behind logs is still to maintain data for correlation and analysis to meet enterprise security and compliance needs. Logs enable you to troubleshoot system issues and support security monitoring; in fact, having logs available to search can be the difference between detecting and triaging threats or performance issues and missing them completely. Not having logs available can be detrimental, yet some businesses still struggle with proper log management.

Why enterprise log management is broken…

Environmental complexity is at an all-time high. IT operations is responsible for monitoring and managing environments with hundreds of systems and thousands of data types across on-premises, cloud, multi-cloud, and hybrid infrastructures. As if that weren't enough, data volumes have reached petabyte scale, and we're looking toward exabytes and zettabytes.

However, the legacy platforms largely in use are not well suited to the volume and variety of data today, and distributed architectures pose a huge organizational challenge when attempting to gain full visibility. As a result, data ends up siloed, and businesses can only run batch processing over data sets instead of what they really need: real-time insights from their data at petabyte scale.

In short, enterprise log management (ELM) is broken because it doesn't have the right architecture, flexibility, or agility to be successful. In many businesses today, missing data precludes the full visibility needed to monitor environments and make informed decisions about the business or security.
In other cases, the volumes of data being collected could be put to use, but the data is siloed or otherwise missing, preventing that full visibility. Either way, when legacy architecture is exclusively in place and flexibility and agility are limited, performance suffers and queries can take days. In today's always-on business environment, queries need to be real time.

…And what to do about it

Now that the challenges facing ELM are established, it's important to adopt a solutions mindset. What can be done to improve ELM, and what are the benefits?

- Get cloud-savvy solutions for co-managed environments. Not only is this more cost-effective, it's a highly efficient use of hardware and storage to collect and analyze all data and turn it into action.
- Establish full visibility via true ELM to ensure everything is in one place. Consider platforms that economically collect historical and real-time streaming data in a single pane of glass to deliver full visibility.
- Leverage a platform that is extensible, allowing core capabilities to supercharge existing applications and data sets to be shared across teams without duplication.
- Use tools that deliver petabyte-scale analysis with good performance. Performance is key: if large volumes of data sit in a cold storage service, logs can't be quickly searched to detect and triage threats, for example.

The outcomes of better ELM, and what's next

The road to ELM can sometimes be paved with too many tools, too much data without enough visibility, and a stretched IT staff. However, with a simple-yet-powerful platform that collects and maintains logs, businesses will secure coverage of all the data that matters in a single place, ensuring queries at the speed of threats and a strong foundation on which to mature security capabilities.

Check back for the second and final installment on why enterprise log management is here to stay.
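To make the performance point concrete, here is a minimal Python sketch of the kind of streaming search that fast, hot log access enables. The syslog-like line format, the field names, and the `failed_logins` helper are all hypothetical, for illustration only; the point is that events surface as lines arrive, rather than waiting on a later batch job.

```python
import re

# Hypothetical log line format:
#   "2024-01-15T09:30:00Z host42 sshd: Failed password for root"
LINE_RE = re.compile(r"(?P<ts>\S+)\s+(?P<host>\S+)\s+(?P<proc>[^:]+):\s+(?P<msg>.*)")

def parse(line):
    """Parse one log line into a dict of fields, or None if it doesn't match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

def failed_logins(lines):
    """Stream-filter: yield parsed events that mention failed logins.

    Accepts any iterable of lines (an open file, a socket, a tail -f
    pipe), so matches are emitted as the data streams in -- the
    real-time behavior described above, as opposed to batch processing.
    """
    for line in lines:
        event = parse(line)
        if event and "Failed password" in event["msg"]:
            yield event

sample = [
    "2024-01-15T09:30:00Z host42 sshd: Failed password for root",
    "2024-01-15T09:30:01Z host42 cron: job finished",
    "2024-01-15T09:30:02Z host42 sshd: Failed password for admin",
]
hits = list(failed_logins(sample))
print(len(hits))  # 2 matching events
```

The same generator works unchanged whether the input is a three-line sample or a live feed; what breaks it in practice is logs parked in cold storage, where the lines cannot be streamed through a filter like this in the first place.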
We will impart further wisdom on the importance of logs to any business’s security posture. In the meantime, find out why log management is critical for business intelligence.