Real-time Security Insights: Apache Pinot at Confluera

Pradeep Gopanapalli
Member Of Technical Staff

Data brings an intentional, methodical, and systematic way of making decisions.

At Confluera, we are continually identifying and collecting signals from customers’ environments to protect them from potential threats. We wanted to create Security Insights and provide observability into customers’ infrastructure, surfacing novel patterns and allowing them to tighten their security posture.

To enable this feature, we went looking for a datastore that could power such insights in a timely manner, which eventually led us to the Apache Pinot project. This blog post talks a bit about the architectural needs that led us to select Pinot and how it helped us build our product’s required capabilities.

A Bit on Confluera’s Graph Building

To give a high-level overview, Confluera offers a threat-detection and response platform. It maps activities happening in an infrastructure environment into graphs by consuming system-level events from hosts that are part of the infrastructure. The graph is further partitioned into subgraphs based on individual intents, which are then fed into our detection engine to find malicious activities.

This proactive real-time graph building helps our detection engine in a couple of ways:

  • The behavior that the threat engine is analyzing is put into context.
  • The unit of detection is an intent-partitioned subgraph instead of a singular event.

The above capabilities help reduce noise and save time by pre-building the attack narrative and providing a surgical response.

See our blog post to learn more about how we do this.

Threat Investigations, Analytics & Security Insights

Once Confluera identifies a threat, we wanted to aid the investigation by providing a platform to search through all the events and, at the same time, a way to answer questions by slicing and dicing over the events in this threat-hunting platform.

Below is a screenshot of our product summarizing a multi-stage attack into a coherent storyline. In addition to this suspicious behavior, there’s a lot of benign activity that is not flagged. We wanted to close the gap by giving users the ability to dig through the entirety of potential attack activity.

[Screenshot: a multi-stage attack summarized into a coherent storyline in the product]

In addition to the above threat-hunting feature, which is mainly suited for “war-time,” we wanted to build a view into the infrastructure’s “peace-time” behavior by surfacing interesting security insights through analytics. Example insights are infrastructure-wide behaviors such as DNS requests being made, programs connecting from outside, login failures, etc.

Requirements

With the above product goals in mind, we wanted a datastore that supports the following features:

  • Real-time: Graph building and threat detection work in real time, so we needed the datastore to be real-time too so that it integrates well with the rest of the product.
  • Low latency analytical queries: Most queries to the datastore would be over a subset of columns, and most queries are analytical queries.
  • Time-series: We needed a time-aware database since the queries we have would very much be bound to a time period.
  • Scalable: We have a write-heavy workload, and the event ingestion rate should scale with the size of the infrastructure.
  • Pre-defined queries: Since most of the queries are pre-defined, the underlying datastore must support executing these queries with a strict low latency SLA.
  • Data correction: We don’t expect modifications to the streaming data once it is inserted, but for any corrections, the datastore should support backfilling the data.

Based on this high-level understanding of our requirements, we found OLAP datastores to satisfy most of the above requirements. We spent time comparing Apache Druid and Apache Pinot for our use case.

We chose Apache Pinot primarily because of the availability of the star-tree index, low latency over the set of queries we were interested in, and a very responsive and active community.

Benchmarks

In this section, we present benchmarks comparing Apache Pinot and Apache Druid in terms of latency and throughput. Apache Druid was set up with 2 data nodes (Historicals and MiddleManagers), 1 query node, and 1 master node (Coordinator and Overlord), with the Historical and Broker caches disabled. Apache Pinot was set up on 3 nodes: 2 server nodes and 1 node containing the Broker & Controller. Each node is an AWS m5a.2xlarge instance (8 vCPUs and 32 GB of RAM). We loaded both systems with the same dataset of 700 million rows, enabling similar indices.

Below is a latency comparison of some of the queries we tried on Apache Pinot vs. Druid. The latency numbers shown are obtained from the query metadata reported by both systems as part of their dashboards. Results were captured once latency had stabilized on both systems to avoid cold data or indices.

[Chart: query latency comparison, Apache Pinot vs. Apache Druid]

As shown above, Apache Pinot was able to support lower latencies for all these queries.


The throughput test was done using the Python clients for Pinot (pinotdb) and Druid (pydruid). The following graph displays the throughput observed on Pinot vs. Druid with a basic setup (i.e., no tuning). The query used for this throughput test is as follows: “Select count(*) from table where col_A = X”

[Chart: throughput comparison, Pinot vs. Druid]

As seen in the graph, Pinot is able to sustain a higher throughput, albeit with increased p99 latencies.
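For reference, here is a minimal sketch of how such a throughput test can be driven with the pinotdb client. The broker address, table name, and filter value below are placeholders rather than our actual schema, and a real test would run many such clients concurrently.

```python
# Minimal single-client sketch of the throughput test using pinotdb.
# Broker address, table name, and filter value are placeholders.
import time
from pinotdb import connect

QUERY = "SELECT count(*) FROM events WHERE col_A = 'X'"  # hypothetical table/column
NUM_QUERIES = 1000

conn = connect(host="pinot-broker", port=8099, path="/query/sql", scheme="http")
curs = conn.cursor()

start = time.time()
for _ in range(NUM_QUERIES):
    curs.execute(QUERY)
    curs.fetchall()  # drain results so each query fully completes
elapsed = time.time() - start

print(f"{NUM_QUERIES} queries in {elapsed:.2f}s "
      f"({NUM_QUERIES / elapsed:.1f} queries/sec)")
```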

Apache Pinot’s Star-tree index and how it sped up our analytics queries

The star-tree index provides a way to trade off storage for latency by doing smart pre-aggregation over a set of columns of interest. On top of that, enabling this index is just a config change away, and reloading the table makes the index available for older segments.
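To make that config change concrete, below is a rough sketch of a star-tree definition, corresponding to the starTreeIndexConfigs section under tableIndexConfig in a Pinot table config. The column names and maxLeafRecords value are hypothetical, and exact fields can vary across Pinot versions.

```python
# Sketch of a star-tree index definition (the starTreeIndexConfigs section
# of tableIndexConfig in a Pinot table config), shown here as a Python dict.
# Column names (col_A, col_B) and maxLeafRecords are placeholders.
star_tree_index_configs = [
    {
        # dimensions to pre-aggregate on, in split order
        "dimensionsSplitOrder": ["col_A", "col_B"],
        "skipStarNodeCreationForDimensions": [],
        # aggregations materialized in the tree
        "functionColumnPairs": ["COUNT__*"],
        # caps records per leaf node: the storage vs. latency knob
        "maxLeafRecords": 10000,
    }
]
```

Once a definition like this is added to the table config and the table is reloaded, the index becomes available for existing segments as well.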

Queries that are part of our security insights and analytics queries are pre-defined, and by using this index, we were able to optimize the latency into the levels we desired. The below chart shows the speedups we achieved on one such aggregate query over data corresponding to different periods.

Query latencies with the star-tree index are ~60x faster compared to the inverted index. The query is of the format “Select col_A, count(*) from table where <time period> and col_B filter group by col_A order by count(*)”, with col_A and col_B part of the star-tree index.

[Chart: query latency with star-tree index vs. inverted index across time periods]

Please note: the latencies shown above are on a log scale.

The star-tree index has been handy for us in introducing new aggregate queries into the product in a relatively short period of time, since tuning latency to meet our requirements is just a config change away, provided the data has already been ingested into Pinot.

Setup with Pinot

Currently, our Pinot setup consumes data from the graph-building engine through Apache Kafka, and we use S3 as the deep store. Data correction or backfilling is done using an offline Spark job. We use a TTL on the data stored in Pinot, which leads to automatic clean-up of older data while keeping the resources we need in check. If Pinot requires additional resources (for example, a new server), we introduce a new server, tag it with the appropriate tenant, and trigger a rebalance that distributes the consuming and rolled-out segments among the new servers.
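As a rough operational sketch, the snippet below shows how the TTL could be expressed as segment retention in the table config and how a rebalance could be triggered through the controller’s REST API. The table name, controller address, and retention values are placeholders, and the exact rebalance parameters may differ across Pinot versions.

```python
# Sketch only: hypothetical table name, controller address, and retention values.
import requests

# The TTL on Pinot data is expressed as segment retention in the table config
# (segmentsConfig section); segments older than the retention period are
# cleaned up automatically.
segments_config = {
    "retentionTimeUnit": "DAYS",
    "retentionTimeValue": "30",  # placeholder retention period
}

# After adding a new server and tagging it with the appropriate tenant,
# a rebalance redistributes segments across the servers in that tenant.
CONTROLLER = "http://pinot-controller:9000"  # placeholder controller address
resp = requests.post(
    f"{CONTROLLER}/tables/events/rebalance",  # 'events' is a hypothetical table
    params={"type": "REALTIME", "dryRun": "false"},
)
print(resp.json())
```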

[Diagram: Pinot setup with Kafka ingestion, S3 deep store, and offline Spark backfill]

With this setup, we closed gaps in our data infrastructure and enabled the following capabilities for our product and our team:

  • Real-time security insights & threat-hunting platform: Our forensic capabilities match the speed of the attackers.
  • Query speed: Engineers have seen complex API calls complete within double-digit milliseconds.
  • Deep Visualization: Designers were able to achieve a deep level of inspection and visualization of data.
  • Scalable and Manageable: Pinot segment management provided our DevOps team with a horizontally scalable system that can meet customer needs fast.

Conclusion

We started actively using Pinot a few months ago, and in this relatively short period, it has been part of multiple feature rollouts. Our experience so far has been great. We have found Pinot operationally simple to manage, scaling to our event ingestion rate. The indices Pinot makes available have helped us experiment with and roll out new features relatively quickly while meeting our latency needs. We have only just started using Pinot in our infrastructure and are excited about expanding its usage in our product and inside Confluera.

We (Confluera) are passionate about security & infrastructure. If you would like to learn more about the product, see what Apache Pinot has enabled us to build into our system, or just want to talk about cybersecurity, shoot an email to demo@confluera.com.
