Will IoT Drown in a Data Lake? It’s Time for the Event Lake

We see IoT being applied everywhere from wind farms to medical devices to smart home devices. Yet all these myriad use cases have one thing in common–insight into what’s happening on each device depends on the ability to capture all the events, then aggregate and enrich them with key information to create “events-of-interest” that can be analyzed for anomalies and quickly acted upon. An event-of-interest could take just about any form–an outage or temperature variance on server fans or HVAC systems, a minuscule drop in performance from a mechanical part on an oil rig, an anomalous weather pattern, a combination of any of these metrics, or a DDoS attack originating from security cameras. The list goes on, but what ultimately defines an event-of-interest is that it has some special significance to your organization and requires action.

While detecting such events has been a primary objective of IoT since the concept’s inception, companies are finding that current information architectures are insufficient for enabling immediate action on anomalous occurrences. In fact, Logtrust’s recent study with 451 Research found that while 69 percent of respondents want millisecond latency, most say their technology isn’t even capable of five-minute latency. A new paradigm is required: the Event Lake.

Data lakes are a step in the right direction, but we need to go further…much further

The ability to detect and act on events–particularly complex ones involving multiple data streams–requires a unified data infrastructure. Companies looking to break down data silos have latched on to the “data lake” concept, which offers a means to store massive volumes of raw data in a single, universally accessible repository. According to Gartner, “By its definition, a data lake accepts any data, without oversight or governance.” Yet the same thing that makes the data lake so attractive–the ability to ingest data without a specified schema–is also its biggest problem. Just like a high school class where the teacher doesn’t enforce rules, this kind of environment makes it difficult to learn anything of value, and that is why data lakes have gained a reputation for quickly becoming data swamps.

Organizations seeking actionable insight from IoT systems will have trouble gleaning useful intelligence without a team of engineers dedicated to extracting, normalizing and processing events-of-interest for each requestor. Clearly, such a model drains too many resources to be sustainable–as Andrew C. Oliver observed, “if this smells like building your own PaaS with devops tools, your nose is working correctly.”

Additionally, such efforts to consolidate data can have the unintended consequence of “tightly coupling” various departments by tying them all to one huge, murky data lake. Even when an event-of-interest is discovered, it’s not entirely clear whom it matters to or who should be taking action–what ought to be one department’s action item becomes everyone’s problem.

From Data Lake to Event Lake

The Event Lake offers a new take on the data lake, one which breaks down information silos without tightly coupling the various operational teams. Simply put, an Event Lake is an elastic repository where event logs are kept in their native format, and events-of-interest are aggregated and kept in a “hot” state, ready to be queried and analyzed.

Imagine, for instance, an energy company collecting IoT data from power plant equipment. One of its engineering teams knows that a pressure dip below a certain level on a condenser, sustained for a certain length of time, indicates a likelihood of failure. In an Event Lake model, such a dip could be predefined as an “event-of-interest,” and as it occurred it would be (a minimal code sketch follows the list):

  • Ingested into the pipeline as raw data
  • Passed through a virtual layer where it is time-stamped and correlated with other incoming events
  • Tagged as an event-of-interest, based on meeting predefined criteria, and pushed out to the correct engineering department as an alert
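
To make that flow concrete, here is a minimal Python sketch of the three stages. The threshold, dwell time and notify_team() sink are all invented for illustration–this is not Logtrust’s actual API:

    import time

    # Assumed illustrative values; real ones would come from the
    # engineering team's failure model.
    PRESSURE_FLOOR = 2.5   # pressure (bar) below which the condenser is at risk
    DWELL_SECONDS = 60     # how long pressure must stay low before we alert

    low_pressure_since = None  # start of the current low-pressure stretch, if any

    def enrich(raw_event):
        """Stage 2: stamp the raw reading with an ingestion timestamp."""
        return {**raw_event, "ingested_at": time.time()}

    def notify_team(event):
        """Stage 3 sink: stand-in for routing an alert to the owning department."""
        print(f"ALERT -> condenser engineering: {event}")

    def process(raw_event):
        """Run one raw reading through all three stages."""
        global low_pressure_since
        event = enrich(raw_event)                      # stages 1 + 2: ingest, enrich
        if event["pressure"] < PRESSURE_FLOOR:
            low_pressure_since = low_pressure_since or event["ingested_at"]
            if event["ingested_at"] - low_pressure_since >= DWELL_SECONDS:
                event["tag"] = "event-of-interest"     # stage 3: tag...
                notify_team(event)                     # ...and push the alert
        else:
            low_pressure_since = None                  # pressure recovered; reset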

One man’s trash is another man’s event-of-interest

In such a system, users subscribe to the events-of-interest that matter to them. Massively parallel query and event processing capabilities are leveraged to extract and publish those events to subscribers. This allows departments, or even authorized external parties such as repair contractors, to be made aware of and act on significant data events in real time, without having to wade through or monitor data that’s irrelevant to them.
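
As a rough sketch of that subscription model (the subscribe/publish interface below is invented for illustration, not a real Logtrust API), each party registers interest only in the event types that concern it:

    from collections import defaultdict

    subscribers = defaultdict(list)   # event type -> list of callbacks

    def subscribe(event_type, callback):
        """Register a team's (or contractor's) interest in one event type."""
        subscribers[event_type].append(callback)

    def publish(event):
        """Deliver a tagged event-of-interest only to its subscribers."""
        for callback in subscribers[event.get("type", "")]:
            callback(event)

    # Engineering and an external repair contractor each see only what
    # they subscribed to; irrelevant events never reach them.
    subscribe("condenser.pressure_drop", lambda e: print("engineering:", e))
    subscribe("condenser.pressure_drop", lambda e: print("contractor:", e))
    publish({"type": "condenser.pressure_drop", "unit": "C-7", "pressure": 2.1})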

Furthermore, aggregated events can themselves be abstracted as a single “event-of-interest.” This is particularly important, as many issues that require decisive action cannot be detected from a single event. For example, while a single condenser failure at a power plant might be of interest to a specific team, multiple failures on a certain combination of machines within a short time frame might signify a more serious issue that requires action from multiple departments. Or, a financial services company might set conditions for an event-of-interest that involves simultaneous changes in multiple economic indicators that tell them it’s either time to buy, sell…or move to Canada.
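
Detecting a composite condition like the multi-failure scenario can be as simple as counting individual failures inside a sliding time window. The sketch below escalates when three condenser failures land within ten minutes–both numbers are invented for illustration, not real operational thresholds:

    from collections import deque

    WINDOW_SECONDS = 600   # assumed correlation window (10 minutes)
    FAILURE_LIMIT = 3      # assumed count that escalates to a composite event

    recent_failures = deque()  # timestamps of individual condenser failures

    def on_failure(timestamp):
        """Record one failure; return a composite event-of-interest if the
        windowed count crosses the escalation threshold, else None."""
        recent_failures.append(timestamp)
        # Drop failures that have aged out of the correlation window.
        while recent_failures and timestamp - recent_failures[0] > WINDOW_SECONDS:
            recent_failures.popleft()
        if len(recent_failures) >= FAILURE_LIMIT:
            return {"type": "plant.multi_failure",
                    "count": len(recent_failures),
                    "window_s": WINDOW_SECONDS}
        return None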

A not-so-humble feat of architecture

As you may have guessed, the Event Lake architecture requires a robust combination of elements to meet the performance demands of IoT scenarios. These include the ability to:

  • Ingest event data at high velocity from any source without response time degradation as volume grows
  • Perform computations in real time on complex streams of events, both in-flight and at-rest
  • Detect patterns by correlating multiple data sources–including structured, semi-structured and unstructured–with long- to very-long-range historical data
  • Automatically notify the correct individuals, teams and applications when specified events occur

This combination of capabilities enables a number of advantages, including greater efficiency in IT, DevOps, and data science functions–as well as greater collaboration, agility and a unified customer experience. Logtrust’s unique architecture provides a number of features that make the Event Lake possible:

  • Linear scalability with ultra-low-latency response times
  • Ingestion of 150,000+ events per second (EPS) per core
  • Ultra-low-latency queries so that insights can be extracted from events as they occur in real time–over 1,000,000 events per second for search and over 65,000 events per second for complex event processing
  • Massively parallel processing in a cloud-based architecture that eliminates bottlenecks
  • No-code/low-code logic that enables business users to ingest and normalize various data sources on the fly, and execute pre-built queries in minutes

Case in Point: Saving the world…one set-top box at a time

It’s not difficult to imagine the possibilities. For example, consider the challenge that telecom companies face in maintaining a quality viewing experience on set-top boxes. One large provider with millions of subscribers uses the Logtrust Event Lake to monitor end-to-end media Quality of Service (QoS) of its digital TV and video on-demand service. They’ve implemented a real-time Event Lake spanning more than 4.5 million set-top boxes and back-end equipment that can instantly identify streaming/picture quality problems.

Depending on the nature of the problem (signified by an event-of-interest), the appropriate party is alerted. In some cases it might be an issue that can be fixed remotely, and an internal team is notified. In cases where on-site repair is required, a third-party contractor is automatically alerted and directed to the customer. In this way, the company acts as a single entity, though it’s actually relying on a network of contractors to service its highly complex system. The bottom line: problems get solved before the customer has a chance to complain–often before they even know there’s an issue. And if you’ve ever had your set-top box go on the fritz during the season finale of your favorite show (Game of Thrones!), or the final moments of your team’s big game, you know how much that means!

July 6th, 2017