
Solution

Implement one or more consumers that, in aggregate, consume all events from all streams and store the unaltered, raw events in highly durable blob storage. The consumers are optimized to store the events in batches. Batching increases throughput so that the consumers keep pace with event volumes, minimizes the potential for errors that could result in data loss, and optimizes the objects for later retrieval. The objects are stored under a path of the form: stream-name/yyyy/mm/dd/hh.
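
A minimal sketch of such a consumer, assuming AWS Kinesis as the streaming service, S3 as the blob storage, and a Lambda handler as the consumer (the environment variables and the newline-delimited .jsonl object format are illustrative assumptions):

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { KinesisStreamEvent } from "aws-lambda";

const s3 = new S3Client({});
const BUCKET = process.env.DATA_LAKE_BUCKET!; // assumed environment variable
const STREAM = process.env.STREAM_NAME!;      // assumed environment variable

export const handler = async (event: KinesisStreamEvent): Promise<void> => {
  // Decode the raw, unaltered events; no transformation is applied.
  const lines = event.Records.map((r) =>
    Buffer.from(r.kinesis.data, "base64").toString("utf8")
  );

  // One object per invocation batch, keyed as stream-name/yyyy/mm/dd/hh/...
  const now = new Date();
  const yyyy = now.getUTCFullYear();
  const mm = String(now.getUTCMonth() + 1).padStart(2, "0");
  const dd = String(now.getUTCDate()).padStart(2, "0");
  const hh = String(now.getUTCHours()).padStart(2, "0");
  const key = `${STREAM}/${yyyy}/${mm}/${dd}/${hh}/${event.Records[0].kinesis.sequenceNumber}.jsonl`;

  await s3.send(
    new PutObjectCommand({
      Bucket: BUCKET,
      Key: key,
      Body: lines.join("\n"), // newline-delimited JSON, one event per line
    })
  );
};
```

Writing one object per batch, keyed by the first record's sequence number, keeps the consumer idempotent: reprocessing the same batch after a failure overwrites the same object rather than duplicating data.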

It is critical to monitor these consumers for errors, alert on their iterator age, and take timely action in the case of an interruption. These consumers are all-important, because any other consumer can be repaired from the events in the data lake, so long as the data lake has successfully consumed the events. The data lake's blob storage should be replicated to a separate account and region for disaster recovery.
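
With Kinesis, for example, the iterator age is exposed as the GetRecords.IteratorAgeMilliseconds metric, and a CloudWatch alarm can notify the team when the consumer falls behind. This sketch assumes an existing SNS topic for alerts; the threshold and alarm name are illustrative:

```typescript
import {
  CloudWatchClient,
  PutMetricAlarmCommand,
} from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({});

// Alarm when the data lake consumer falls more than 15 minutes behind
// its stream. The SNS topic ARN is an illustrative assumption.
export async function createIteratorAgeAlarm(streamName: string) {
  await cw.send(
    new PutMetricAlarmCommand({
      AlarmName: `${streamName}-data-lake-iterator-age`,
      Namespace: "AWS/Kinesis",
      MetricName: "GetRecords.IteratorAgeMilliseconds",
      Dimensions: [{ Name: "StreamName", Value: streamName }],
      Statistic: "Maximum",
      Period: 60,
      EvaluationPeriods: 5,
      Threshold: 15 * 60 * 1000, // 15 minutes, in milliseconds
      ComparisonOperator: "GreaterThanThreshold",
      AlarmActions: ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    })
  );
}
```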

Create a program to replay events to a specific consumer. A generic program would read the events from a specified path, filter for specified event types, and send the events to the specified consumer. Specialized programs can be created as needed.
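
A sketch of the generic replay program, again assuming S3 for the data lake and a Lambda function as the target consumer; the newline-delimited object format and the event's type field follow the assumptions in the earlier consumer sketch:

```typescript
import {
  S3Client,
  ListObjectsV2Command,
  GetObjectCommand,
} from "@aws-sdk/client-s3";
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const s3 = new S3Client({});
const lambda = new LambdaClient({});

// Replays events under a path (e.g. "order-events/2023/06/01/12") to a
// consumer function, keeping only the requested event types.
export async function replay(
  bucket: string,
  prefix: string,
  types: Set<string>,
  consumerFunction: string
): Promise<void> {
  // A single page of up to 1,000 keys; production code would paginate.
  const listing = await s3.send(
    new ListObjectsV2Command({ Bucket: bucket, Prefix: prefix })
  );

  for (const obj of listing.Contents ?? []) {
    const res = await s3.send(
      new GetObjectCommand({ Bucket: bucket, Key: obj.Key! })
    );
    const body = await res.Body!.transformToString();

    for (const line of body.split("\n").filter(Boolean)) {
      const event = JSON.parse(line);
      if (!types.has(event.type)) continue; // filter for specified types

      await lambda.send(
        new InvokeCommand({
          FunctionName: consumerFunction,
          Payload: Buffer.from(JSON.stringify(event)),
        })
      );
    }
  }
}
```

Because the path encodes the stream name and hour, the prefix argument doubles as the replay scope: a whole stream, a single day, or one hour of events.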

Optionally, implement consumers to store events in a search engine, such as Elasticsearch, for indexing and time-series analytics. The index can be leveraged to perform ad hoc analysis, such as investigating an incident, which is usually a precursor to replaying a specific set of events. The index can also be used to create analytics dashboards.
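
A sketch of such an indexing consumer using the v8 Elasticsearch JavaScript client; the endpoint, the daily index-naming scheme, and the event shape are illustrative assumptions:

```typescript
import { Client } from "@elastic/elasticsearch";

const es = new Client({ node: "http://localhost:9200" }); // assumed endpoint

// Indexes a raw event into a daily, time-series index so it can be queried
// ad hoc (e.g. during an incident investigation) before a targeted replay.
export async function indexEvent(event: {
  id: string;
  type: string;
  timestamp: string; // ISO 8601, e.g. "2023-06-01T12:34:56Z"
}): Promise<void> {
  const day = event.timestamp.slice(0, 10); // yyyy-mm-dd
  await es.index({
    index: `events-${day}`, // one index per day eases retention management
    id: event.id,           // idempotent: re-consuming overwrites, not duplicates
    document: event,
  });
}
```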