Using Rolling Deployments to Limit Service Disruptions

02 April 2018

Tags: java microservices

Please check out my latest blog post for New Relic, where I discuss how we deploy our high-throughput, backend microservices. Please give it a read; I am interested in your thoughts on the subject!


A Gradle Plugin That Applies Itself

29 March 2018

Tags: gradle groovy

The plugin feature of Gradle makes it very easy to codify functionality for re-use between different projects. I personally like to use this functionality to provide common build and deploy functionality to many of my projects. Recently, I found myself creating a plugin that will generate a Markdown file containing all of the changes committed to Git, organized by release. When I finished creating the plugin, I realized that I also wanted to use the plugin in the plugin project itself so that I could keep a log of changes to the plugin (so meta!). However, releasing the plugin and including it as a dependency so it could be applied to itself seemed like a bad idea: it would forever be one release behind, assuming that I remembered to update the dependency version each time. That obviously is not a very maintainable solution. The answer? Apply the plugin to itself programmatically by taking advantage of some Groovy magic:

// Apply the changelog plugin to itself!
def classpath = [file('src/main/groovy').absolutePath, file('src/main/resources').absolutePath] as String[]
def pluginDescriptor = new Properties()
// The descriptor file under META-INF/gradle-plugins/ is named after the plugin id
file('src/main/resources/META-INF/gradle-plugins/').listFiles().first().withInputStream { pluginDescriptor.load(it) }
apply plugin: new GroovyScriptEngine(classpath, this.getClass().getClassLoader()).loadScriptByName("${pluginDescriptor.getProperty('implementation-class').replaceAll('\\.', '/')}.groovy")

So, what is this code snippet doing exactly? The first line ensures that the source folders of the plugin project are part of the classpath that will be used to find and load the plugin. Next, we create a Java Properties object to hold the contents of the plugin's descriptor file and load that descriptor from the META-INF/gradle-plugins directory so that we can determine the main class of the plugin. Finally, the last line uses Groovy's GroovyScriptEngine class to load the class referenced by the plugin descriptor so that it can be applied by Gradle as a plugin. After placing this code in the plugin's build.gradle file, we are assured that the latest source in the plugin project is applied to the plugin itself so that it can be used to generate the changelog for that project.
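For reference, the descriptor being loaded here is a small properties file whose name determines the plugin id and whose single entry names the plugin class. The file name and class below are illustrative placeholders, not the actual plugin's:

```properties
# src/main/resources/META-INF/gradle-plugins/changelog.properties
# (the file name 'changelog' and the class below are hypothetical examples)
implementation-class=com.example.gradle.ChangelogPlugin
```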


Choosing an Application Framework for Microservices

08 August 2017

Tags: java spring-boot microservices

Earlier this year, I authored a blog post for New Relic where I discussed how my team decided to use Spring Boot to build microservices. Recently, the post has been picked up by a bunch of other outlets and has been making the rounds on the interwebs. Please give it a read; I am interested in your thoughts on the subject!


The Peculiar Case of the Kafka ConsumerIterator

20 December 2016

Tags: java kafka

This post is written using Apache Kafka 0.8.2

Creating an Apache Kafka client is a pretty straight-forward and prescriptive endeavor. What is not straight-forward, or even expected, is the behavior of the Iterator that is used to poll an Apache Kafka topic/partition for messages. More on this in a moment. First, let's look at the typical setup to consume data from an Apache Kafka stream (for the sake of keeping this post brief, I am going to skip the details around creating and configuring a ConsumerConnector):

final Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
    consumer.createMessageStreams(Collections.singletonMap("topic", 1));
final List<KafkaStream<byte[], byte[]>> streams = consumerMap.get("topic");
final ConsumerIterator<byte[], byte[]> iterator = streams.get(0).iterator();

With the ConsumerIterator in hand, the next step is to poll the Iterator for incoming messages:

while(iterator.hasNext()) {
    MessageAndMetadata<byte[], byte[]> message = iterator.next();
    // Process the message...
}
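As an aside, this polling loop assumes standard Iterator semantics. With a plain JDK collection, hasNext() answers immediately (plain Java, no Kafka involved):

```java
import java.util.Arrays;
import java.util.Iterator;

public class IteratorExpectations {
    public static void main(String[] args) {
        // hasNext() on a collection-backed Iterator returns at once;
        // it never blocks waiting for more elements to arrive.
        Iterator<String> it = Arrays.asList("a").iterator();
        System.out.println(it.hasNext()); // true
        it.next();
        System.out.println(it.hasNext()); // false -- no blocking, no exception
    }
}
```

The ConsumerIterator, as we are about to see, does not behave this way.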

This all seems pretty simple. Now, back to the issue with this code: the expectation is that this would check the Iterator for a message and, if none is present, loop immediately and check again (standard Iterator behavior). However, this is not the case. The behavior of the ConsumerIterator is actually controlled by the consumer.timeout.ms configuration setting. This setting controls whether or not the Iterator will "throw a timeout exception to the consumer if no message is available for consumption after the specified interval". By default, this value is set to -1, which means that the call to hasNext() will block indefinitely until a message is available on the topic/partition assigned to the consumer. The Java documentation for the Iterator interface does not specify whether or not the hasNext() method is allowed to block indefinitely, so it's hard to say that the ConsumerIterator is violating the contract. However, this is certainly not the behavior that anyone used to the Iterator pattern in Java would expect, as collections typically don't block until data is available in the data structure. If the consumer.timeout.ms configuration setting is set to a positive value, the consumption code needs to be modified to handle a ConsumerTimeoutException:

while(active) {
    try {
        if(iterator.hasNext()) {
            MessageAndMetadata<byte[], byte[]> message = iterator.next();
            // Process the message...
        }
    } catch(ConsumerTimeoutException e) {
        // Do nothing -- this means no data is available on the topic/partition
    }
}

Now, the call to hasNext() will behave more like an Iterator retrieved from a collection, which is to say it will not block indefinitely. It is recommended that you do some testing to determine an acceptable timeout value: too small a value will cause the loop to spin frequently, increasing CPU utilization. It is also worth noting that the Kafka documentation does not directly link the consumer.timeout.ms configuration setting to the ConsumerIterator, so this issue would most likely go unnoticed in scenarios where data is consistently available to the client. In any case, this issue highlights the need to take a deeper look at any API or library you include in your application in order to ensure that you understand exactly how it works and what performance impacts it may have on the execution of your code.
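For completeness, here is a sketch of what enabling the timeout might look like in the configuration handed to the ConsumerConnector. The connection values and the timeout of 500 ms are illustrative only; tune the timeout for your own workload:

```properties
# Illustrative Kafka 0.8.x high-level consumer configuration
zookeeper.connect=localhost:2181
group.id=example-group
# A positive value makes hasNext() throw ConsumerTimeoutException
# after waiting this many milliseconds with no message available
consumer.timeout.ms=500
```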


Older posts are available in the archive.