Machine data processing and 5G, IoT, and AI at Mobile World Congress 2019

One thing that’s become evident to me after years attending Mobile World Congress is that, in fact, there are several events running in parallel, with a few common denominators: network technology providers, device manufacturers, telecom operators, and services companies all come to Barcelona to present and demonstrate the latest and greatest of the year’s dominating trends. It is a fascinating process that shapes the communications services landscape and delivers new technologies to us as consumers in as little as two to three years.

This year’s Mobile World Congress news was dominated by 5G, IoT, artificial intelligence, phones, and autonomous cars, and it was really difficult to escape those topics. While you might think Devo has little to do with foldable phones or autonomous cars, there are more subtle ways in which these big trends align with our portfolio of solutions.

Indeed, a significant portion of the customer interactions we had revolved in one way or another around those headlines, but equally interesting was the common thread in what we heard from most of the service providers that visited the Devo booth: the need to ensure today’s operations at scale while preparing for the challenges and complexity 5G and IoT are bringing.

Massive data processing for multiple applications (not only network and service operations but also security, fraud detection, and more) is a well-understood problem, but solving it efficiently at scale, from both a cost and a time-to-market perspective, is a different kind of animal.

Let’s discuss in greater detail:

  1. We are in a kind of ‘second wave’ in machine data processing: virtually all telecom providers have had these capabilities in place for some time, but very few have met their machine data processing needs completely. As a result, many are revisiting the same question: how to move beyond heavy, complex, heterogeneous, and difficult-to-maintain data processing stacks in the hope of finding better solutions.
  2. Of paramount concern to most prospects was understanding how data processing solutions (and Devo in particular) could manage the complexity of integrating data sources; processing and retaining volumes of raw data; and solving their difficulties in distilling value in reasonable time frames. After all, dealing with hundreds of different sources has become the norm.
  3. Machine data processing at scale isn’t only difficult to handle from a technical perspective once certain ingestion levels are reached (tens of terabytes per day), but can also be prohibitive from a cost standpoint, in both licensing and infrastructure allocation.
  4. The promise of centralized data lakes that effectively tear down silos and truly democratize data is therefore undermined by all of the factors above.
  5. Realizing the ultimate value of machine data processing at scale is on everyone’s mind: operational efficiency, cost reduction, and best-in-class customer satisfaction indicators are seen as the real goals to achieve.
  6. Companies won’t accept solutions that place limitations on the future, meaning any offering that prevents adding advanced logic on top of an existing data management solution: all data should be easily accessible to feed ML/AI processes, whether implemented by in-house data scientists and engineers or by external providers.
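To make the integration challenge in the second point concrete, here is a minimal sketch of normalizing heterogeneous sources into a common schema while retaining the raw payload for later re-processing. The parser names, source formats, and field names are purely illustrative assumptions, not Devo’s actual API or any provider’s real schema:

```python
import json

# Hypothetical parsers for two of the hundreds of source formats a telecom
# provider might ingest; formats and field names are illustrative only.
def parse_syslog(line):
    # e.g. "2019-02-25T10:00:00Z fw01 DROP src=10.0.0.5"
    ts, host, rest = line.split(" ", 2)
    return {"ts": ts, "source": host, "message": rest}

def parse_json_event(line):
    evt = json.loads(line)
    return {"ts": evt["timestamp"], "source": evt["device"], "message": evt["event"]}

PARSERS = {"syslog": parse_syslog, "json": parse_json_event}

def normalize(source_type, raw_line):
    """Map a raw record to a common schema, keeping the raw payload intact."""
    record = PARSERS[source_type](raw_line)
    record["raw"] = raw_line  # retain raw data so value can be distilled later
    return record
```

Keeping the raw line alongside the normalized fields is one way to reconcile the tension the list describes: retention of raw volumes on one hand, and timely extraction of value on the other.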

Again, this was just a depiction of the current landscape. One can only expect these needs to become even more urgent with the increase in machine data to process once 5G is running at full steam, once IoT applications begin to proliferate, once there is a full transition to microservices-based architectures, or once VR/AR services become mainstream.

In many of our interactions, the topic of use cases came up. These are the top three I would personally highlight:

  1. Operating more, operating faster, operating better: Competition in the sector leads telecom providers to increase both the variety of their services and the complexity of their infrastructures. With total ‘lights-out’ operation (fully automated, with self-healing capabilities) still far over the horizon, any help in making operations more efficient was absolutely welcome.
  2. Machine learning / artificial intelligence is the holy grail for operations: It was difficult to get into any interaction without an explicit mention of AIOps-based use cases. While everyone agrees with that vision, it is also true that ML/AI is not an all-or-nothing philosophy; a significant number of use cases can still be realized through pure data aggregation, correlation, and ‘basic’ analytics, without overkill.
  3. Need to maximize autonomy in the implementation of use cases, since ‘no one knows our customers (or network, or business) better than us’. Captivity-prone platforms that either retain data or impose the use of third-party modules or services are perceived as a critical drawback: because data management architectures naturally grow horizontally in volume and vertically in supported functions, such platforms lead to vendor lock-in.
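The second point, that plain aggregation and correlation can carry many use cases without any ML, can be sketched in a few lines. The device names, statuses, and threshold below are hypothetical, chosen only to show the shape of such ‘basic’ analytics:

```python
from collections import defaultdict

# Toy event stream of (device, status) pairs; in practice these would come
# from the machine data feeds discussed above. Names are illustrative.
events = [
    ("cell-042", "ok"), ("cell-042", "error"), ("cell-042", "error"),
    ("cell-107", "ok"), ("cell-107", "ok"), ("cell-042", "error"),
]

def error_rates(events):
    """Aggregate per-device error rates: plain counting, no ML required."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for device, status in events:
        totals[device] += 1
        if status == "error":
            errors[device] += 1
    return {device: errors[device] / totals[device] for device in totals}

rates = error_rates(events)
# Devices whose error rate crosses a simple threshold get flagged for review.
flagged = [device for device, rate in rates.items() if rate > 0.5]
```

A fixed threshold like this is exactly the kind of ‘basic’ analytic that delivers value immediately; an ML-based anomaly detector can replace it later, once the aggregated data is already flowing.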

Such shared concerns and use cases often led to an equally common set of recommendations from our point of view, all derived from our own experience and the challenges dealt with in recent years, and very much aligned with our company’s DNA:

  1. Machine data processing at scale is an investment, and as a result, it must lead to tangible value. Implementing large and complex infrastructures for data processing without a clear return on investment is a journey that’s better not to start.
  2. Don’t underestimate time-to-value when setting up or evolving data processing solutions, nor the ‘day after’, i.e., the difficulty of operating and maintaining high-performance architectures: consider aspects such as ease of use when supporting new data sources (which can, and certainly will, number in the hundreds); plan for the evolution of internal data models; and do not tolerate downtime due to changes in data formats or schemas.
  3. Not all companies (the same goes for departments within telecom providers) are prepared to make the investment needed to set up, maintain, extend, and evolve a data processing solution fully in house. Nor are they able to bring highly specific skill sets onboard to extract value. Visibility into a company’s data is about making data an accessible tool to positively influence the business. It’s not about making data a goal to achieve.
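The second recommendation, tolerating schema and format changes without downtime, can be illustrated with a minimal schema-tolerant ingestion sketch. Known fields get defaults, and fields the current schema does not recognize are preserved instead of rejected. All field names here are hypothetical:

```python
# Known fields and their defaults; anything else is kept, not dropped, so a
# source adding a new field does not break ingestion. Names are illustrative.
KNOWN_FIELDS = {"timestamp": None, "device": "unknown", "bytes": 0}

def ingest(event):
    """Extract known fields with defaults; preserve unknown ones in 'extra'."""
    record = {key: event.get(key, default) for key, default in KNOWN_FIELDS.items()}
    record["extra"] = {k: v for k, v in event.items() if k not in KNOWN_FIELDS}
    return record

# An event in the old format, and one where the source added a new field.
old = ingest({"timestamp": "t1", "device": "gw-1", "bytes": 512})
new = ingest({"timestamp": "t2", "device": "gw-1", "bytes": 128, "latency_ms": 7})
```

Both events ingest cleanly; the new `latency_ms` field lands in `extra` until the internal data model is deliberately evolved to promote it, rather than causing a pipeline outage.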

It is difficult to leave MWC 2019 without feeling excited and fully energized. This edition was memorable in so many respects, but from a personal standpoint, I can’t help but remember so many good interactions and the constant feeling that, from a Devo perspective, we have a truly unique proposition backed by truly unique technology, and we felt the same energy from those who visited the booth. Here’s to continuing the conversation at next year’s event.