Blueflood: Simple Metrics Processing
The Challenge
Rackspace needed a metrics system that could ingest the 30 million signals generated by its Cloud Monitoring system. It had to support custom data-retention levels while still serving graphs to customers in real time.
How We Did It
We created a distributed system of shared-nothing nodes that split the responsibilities of:
- ingesting data,
- processing rollups and
- serving data points for reads.
Depending on need, nodes can easily be reconfigured to support all or some of those functions, as sketched below.
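A minimal sketch of what that role-based configuration could look like (the class and role names here are hypothetical, not Blueflood's actual API):

```java
import java.util.EnumSet;
import java.util.Set;

public class MetricsNode {

    // The three responsibilities described above.
    enum Role { INGEST, ROLLUP, QUERY }

    private final Set<Role> roles;

    MetricsNode(Set<Role> roles) {
        this.roles = roles;
    }

    void start() {
        // Each service starts only if this node is configured for that role,
        // so the same binary can run as an ingest-only, rollup-only, or
        // all-in-one node.
        if (roles.contains(Role.INGEST)) {
            System.out.println("starting ingestion service");
        }
        if (roles.contains(Role.ROLLUP)) {
            System.out.println("starting rollup scheduler");
        }
        if (roles.contains(Role.QUERY)) {
            System.out.println("starting read/query service");
        }
    }

    public static void main(String[] args) {
        // A node dedicated to ingestion and rollups; another node could enable
        // all three roles, or only QUERY, without any code changes.
        new MetricsNode(EnumSet.of(Role.INGEST, Role.ROLLUP)).start();
    }
}
```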
What You Will Learn
- How putting the user experience first drove requirements for this system.
- How we leveraged excess processing power on existing storage hardware so that no additional hardware was required for this service.
- Techniques for scheduling rollups while still maintaining numerical accuracy (see the sketch after this list).
- How we handle non-numerical data points.
- How we utilized open-source technology (Apache Cassandra, Scribe, Thrift, and Node.js) to deliver relatively quickly.
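To illustrate the accuracy point, here is a minimal sketch of a common way to keep rollups numerically accurate (the class is hypothetical, not Blueflood's actual code): each window stores sum, count, min, and max, so coarser rollups can be merged exactly from finer ones instead of averaging averages.

```java
public class Rollup {
    private double sum = 0;
    private long count = 0;
    private double min = Double.POSITIVE_INFINITY;
    private double max = Double.NEGATIVE_INFINITY;

    // Fold a raw data point into this window.
    void add(double value) {
        sum += value;
        count++;
        min = Math.min(min, value);
        max = Math.max(max, value);
    }

    // Fold a finer-grained rollup into a coarser one; exact, because no
    // precision is lost to intermediate averages.
    void merge(Rollup other) {
        sum += other.sum;
        count += other.count;
        min = Math.min(min, other.min);
        max = Math.max(max, other.max);
    }

    double average() {
        return count == 0 ? Double.NaN : sum / count;
    }

    public static void main(String[] args) {
        Rollup windowA = new Rollup();
        for (double v : new double[] {1.0, 2.0, 3.0}) windowA.add(v);

        Rollup windowB = new Rollup();
        windowB.add(10.0);

        Rollup coarser = new Rollup();
        coarser.merge(windowA);
        coarser.merge(windowB);
        System.out.println(coarser.average()); // 4.0, not (2.0 + 10.0) / 2 = 6.0
    }
}
```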
About the speaker:
An Apache Cassandra committer and PMC member, Gary Dusbabek is a lifelong programmer specializing in distributed systems. His past experience includes working with large-scale text and image indexes in the newspaper industry and high-volume advertisement booking software. Recent work at Rackspace includes working on Cassandra full-time and being a founding member of the Cloud Monitoring team. Gary currently works on the Rackspace Service Registry.
Schedule info
Time slot:
3 June 16:50 - 17:35
Room:
Kesselhaus