How I improved latency by 700% using sync.Pool

We write fast backends, targeting at most 30-40ms of turnaround time per web-service request. We continuously try to reduce the money spent per request. This post lists a few of our findings.

Precautionary warning: It’s a long post with a lot of code. Comment if I am unclear, abstract or vague at any point.

Why use sync.Pool?

sync.Pool [1] comes in really handy when you want to reduce the number of allocations performed over the course of some Go code’s execution. A Pool is a set of temporary objects that may be individually saved and retrieved. fasthttp [2] and zerolog [3] are a couple of the most popular open-source Golang libraries that use sync.Pool at the core of their implementations.
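To see the allocation savings concretely, here is a small self-contained sketch (the bytes.Buffer workload is an arbitrary illustration, not code from our service) that uses testing.AllocsPerRun to compare a pooled and a non-pooled path:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
	"testing"
)

// Pool of bytes.Buffer objects; New is called only when the pool is empty.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// withPool reuses a pooled buffer for a tiny write workload.
func withPool() {
	b := bufPool.Get().(*bytes.Buffer)
	b.Reset()
	b.WriteString("hello")
	bufPool.Put(b)
}

// withoutPool allocates a fresh buffer on every call.
func withoutPool() {
	b := new(bytes.Buffer)
	b.WriteString("hello")
}

func main() {
	fmt.Println("pooled allocs/op:    ", testing.AllocsPerRun(1000, withPool))
	fmt.Println("non-pooled allocs/op:", testing.AllocsPerRun(1000, withoutPool))
}
```

In steady state the pooled path reports close to zero allocations per call, because the same buffer keeps getting reused (the runtime may still occasionally clear the pool during GC).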

How to use sync.Pool?

Consider the following example:

import (
	"fmt"
	"sync"
)

// Pool for our struct A
var pool *sync.Pool

// A dummy struct with a member
type A struct {
	Name string
}

// Func to init pool
func initPool() {
	pool = &sync.Pool{
		New: func() interface{} {
			fmt.Println("Returning new A")
			return new(A)
		},
	}
}

// Main func
func main() {
	// Initializing pool
	initPool()
	// Get hold of instance one
	one := pool.Get().(*A)
	one.Name = "first"
	fmt.Printf("one.Name = %s\n", one.Name)
	// Submit back the instance after using it
	pool.Put(one)
	// Now the same instance becomes reusable by another routine without allocating it again
}

I picked one of the most commonly used APIs from our web-service, which:

  • Performs 4 cache queries
  • Sends 5 separate events onto Kafka
  • Calls 2 separate micro-services synchronously

I ran a 10-rps load test on my local machine for 10 seconds, first on the pooled flow and then on the non-pooled flow.

These latencies are higher because this service was running in our Pune (India) office and communicating with other microservices running in Oregon (United States) :).

For pooled version

Requests      [total, rate]            100, 10.10
Duration      [total, attack, wait]    10.189033566s, 9.899999979s, 289.033587ms
Latencies     [mean, 50, 95, 99, max]  241.952809ms, 252.716311ms, 476.400585ms, 491.370124ms, 503.938792ms
Bytes In      [total, mean]            8400, 84.00
Bytes Out     [total, mean]            0, 0.00
Success       [ratio]                  100.00%
Status Codes  [code:count]             200:100
Error Set:

For non-pooled version

Requests      [total, rate]            100, 10.10
Duration      [total, attack, wait]    13.28824244s, 9.899999931s, 3.388242509s
Latencies     [mean, 50, 95, 99, max]  880.57307ms, 325.619739ms, 3.476523432s, 3.685206241s, 3.888228236s
Bytes In      [total, mean]            8400, 84.00
Bytes Out     [total, mean]            0, 0.00
Success       [ratio]                  100.00%
Status Codes  [code:count]             200:100


  • We get roughly a 7x (700%) improvement at the 95th percentile.
  • For most of the objects, the ratio of reuse to allocation was roughly 10-15x, which means an object allocated around 5 times was reused 50-70 times over the course of this benchmark.

Precautions to be taken while using sync.Pool


What if you pass a pooled object to a goroutine and submit it back to the pool before the goroutine has completed its execution?

This is a pickle: once you put the object back, it is ready for reuse. So before putting an object back into the pool, you have to be sure that nothing else is still using it for any other operation.


Let’s consider this flow in a more practical use case

Here is the typical flow for a Web-Service API.

Let’s assume that our controller uses a pooled struct to read the body of the incoming request.

  1. Go server accepts the request, spawns a go-routine and calls your controller. Typically a func like func HandleRequest(w http.ResponseWriter, r *http.Request).
  2. Let’s consider it’s a POST API call, and we read the body of the request, map it to a struct A.
  3. Now, through the course of the flow, we might perform some operations on the object asynchronously, in a separate goroutine.
  4. At the end of controller functionality, we put back the object into the pool for reuse.

Refer to the following diagram showing the flow:


The issue here: what if Async Call 1 takes more time, and before it completes its work, your controller has written the response and the object has been put back into the pool?


The only solution is manual reference counting for the pooled objects. When I started googling for a solution, I came across a nicely written blog post here [4].

Basic flow for Reference Counting will be:

  • Increment the counter when acquiring the object
  • Decrement the counter once the routine is done using it
  • Increment the counter before passing the object to a go-routine or channel
  • Decrement the counter once that routine is done processing the object
  • Put an object back into the pool only when its reference count reaches zero


This implementation is heavily inspired by the hydrogen18 blog post, but has some modifications that are necessary for a practical web app.

We will have an interface acting as the blueprint of how a reference countable object is implemented:

// Interface for a reference countable object
// We have provided a built-in embeddable implementation of the reference counter
// This interface just provides extensibility for the implementation
type ReferenceCountable interface {
	// Method to increment the reference count
	IncrementReferenceCount()
	// Method to decrement the reference count
	DecrementReferenceCount()
}

  • IncrementReferenceCount to increase the reference count of the current object
  • DecrementReferenceCount to decrease the reference count of the current object

We will write a struct that implements this interface, which can be embedded in our own structs.

// Struct representing a reference counter
// This struct is supposed to be embedded inside the object to be pooled
// Incrementing and decrementing the references correctly is critical, specifically around routines
type ReferenceCounter struct {
	count       *uint32                 `sql:"-" json:"-" yaml:"-"`
	destination *sync.Pool              `sql:"-" json:"-" yaml:"-"`
	released    *uint32                 `sql:"-" json:"-" yaml:"-"`
	Instance    interface{}             `sql:"-" json:"-" yaml:"-"`
	reset       func(interface{}) error `sql:"-" json:"-" yaml:"-"`
	id          uint32                  `sql:"-" json:"-" yaml:"-"`
}

// Method to increment a reference
func (r ReferenceCounter) IncrementReferenceCount() {
	atomic.AddUint32(r.count, 1)
}

// Method to decrement a reference
// If the reference count goes to zero, the object is put back inside the pool
func (r ReferenceCounter) DecrementReferenceCount() {
	if atomic.LoadUint32(r.count) == 0 {
		panic("this should not happen =>" + reflect.TypeOf(r.Instance).String())
	}
	if atomic.AddUint32(r.count, ^uint32(0)) == 0 {
		atomic.AddUint32(r.released, 1)
		if err := r.reset(r.Instance); err != nil {
			panic("error while resetting an instance => " + err.Error())
		}
		r.destination.Put(r.Instance)
	}
}
  • ReferenceCounter does the job of thread-safe reference counting
  • It also puts the instance associated with the counter back into the pool as soon as the reference count reaches zero
  • The reset function ideally resets all the members of the instance of an object
  • This is necessary in most practical use cases:
    • Consider the following struct:
      type A struct {
          First  string `json:"f"`
          Second string `json:"s"`
          Third  int    `json:"t"`
      }
    • Suppose that the first time we use it, we get a JSON like the following:
          {
              "f" : "one",
              "t" : 1
          }
    • We do the processing and put the object back inside the pool.
    • Now we get a second request with the following JSON:
          {
              "s" : "second"
          }
    • Technically the resulting struct should have First as “” and Third as 0. But as we are reusing pooled objects, First will be “one” and Third will be 1.
    • Hence we use a Reset function that resets all the members of the object before putting it back in the pool.

And finally the pool itself, the reference countable pool:

// Struct representing the pool
type referenceCountedPool struct {
	pool       *sync.Pool
	factory    func() ReferenceCountable
	returned   uint32
	allocated  uint32
	referenced uint32
}

// Method to create a new pool
func NewReferenceCountedPool(factory func(referenceCounter ReferenceCounter) ReferenceCountable, reset func(interface{}) error) *referenceCountedPool {
	p := new(referenceCountedPool)
	p.pool = new(sync.Pool)
	p.pool.New = func() interface{} {
		// Incrementing allocated count
		atomic.AddUint32(&p.allocated, 1)
		c := factory(ReferenceCounter{
			count:       new(uint32),
			destination: p.pool,
			released:    &p.returned,
			reset:       reset,
			id:          p.allocated,
		})
		return c
	}
	return p
}

// Method to get a new object
func (p *referenceCountedPool) Get() ReferenceCountable {
	c := p.pool.Get().(ReferenceCountable)
	// Take the acquiring reference
	c.IncrementReferenceCount()
	atomic.AddUint32(&p.referenced, 1)
	return c
}

// Method to return reference counted pool stats
func (p *referenceCountedPool) Stats() map[string]interface{} {
	return map[string]interface{}{"allocated": p.allocated, "referenced": p.referenced, "returned": p.returned}
}

  • NewReferenceCountedPool expects a factory method that returns a ReferenceCountable instance, i.e. an object implementing ReferenceCountable. In this example we embed ReferenceCounter, which satisfies this condition.

Now start using the ReferenceCountedPool as follows.

Consider the following struct Event:

// Struct representing an event
type Event struct {
	pool.ReferenceCounter `sql:"-"`

	Name      string    `json:"name"`
	Log       string    `json:"log"`
	Timestamp time.Time `json:"timestamp"`
}

// Method to reset an event
func (e *Event) Reset() {
	e.Name = ""
	e.Log = ""
	e.Timestamp = time.Time{}
}

// Reset function for Event
// Used by the reference countable pool
func ResetEvent(i interface{}) error {
	obj, ok := i.(*Event)
	if !ok {
		return fmt.Errorf("illegal object sent to ResetEvent: %v", i)
	}
	obj.Reset()
	return nil
}

// Method to create a new event
func NewEvent(name, log string, timestamp time.Time) *Event {
	e := AcquireEvent()
	e.Name = name
	e.Log = log
	e.Timestamp = timestamp
	return e
}

And finally the Event pool

// Event pool
var eventPool = pool.NewReferenceCountedPool(
	func(counter pool.ReferenceCounter) pool.ReferenceCountable {
		br := new(Event)
		br.ReferenceCounter = counter
		br.Instance = br
		return br
	}, ResetEvent)

// Method to get a new Event
func AcquireEvent() *Event {
	return eventPool.Get().(*Event)
}

Basic usage:

e := models.NewEvent("test", "this is a test log", time.Now())
defer e.DecrementReferenceCount()


This entire approach adds some overhead and requires a lot more precision while coding: miss the reference counting in one place and you can end up with a lot of unwanted results. But once you get it right, it works like a charm (see the benchmarks above).

Let me know your opinions, doubts, arguments here or at akshaymdeo[at] Happy coding \m/


  • [1]
  • [2]
  • [3]
  • [4]

Golang: Returning errors with context

In Golang it is usually recommended that, while propagating errors outwards, you just return the error and do not log it. This blog post by Dave Cheney talks at length about how we should handle logging and errors. But while returning/propagating errors, sometimes it becomes necessary to add the context of the error along with the actual error.

A basic example:

Consider that your web app talks to another gRPC server to get location information, through a function called GetLocationFor(user). There could be a long function call tree that lands us in this function. So if something goes wrong with the gRPC connection and we return the error as-is, we have technically lost the context.

func GetLocationFor(u *User) (*Location, error) {
	respMsg, err := grpcClient.GetLocationFor(u)
	if err != nil {
		// here we directly send the error
		return nil, err
	}
	// process the respMsg and move on
}

So here is a better way to just return the error (without logging it) while keeping the context.

Consider the following supporting method for creating error objects:

func New(args ...interface{}) error {
	var err error
	var rawData []interface{}
	for _, arg := range args {
		switch arg.(type) {
		case error:
			err = arg.(error)
			log.Println("error", err)
		default:
			rawData = append(rawData, arg)
		}
	}
	if err == nil {
		return errors.New(fmt.Sprintf("%v", rawData))
	}
	return errors.New(fmt.Sprintf("%v [error => %s]", rawData, err.Error()))
}

And use it as


func GetLocationFor(u *User) (*Location, error) {
	respMsg, err := grpcClient.GetLocationFor(u)
	if err != nil {
		// here we return the error along with its context
		return nil, errors.New("while getting location from grpc client in GetLocationFor", err)
	}
	// process the respMsg and move on
}

After this, whenever you log the error, it will read something like while getting location from grpc client in GetLocationFor [error => <original_error_message_from_grpc_client>].

This keeps the context of the error very specific and makes it easier to pinpoint the exact issue.

Happy coding \m/

Shipping your Android SDK anytime on live devices

As per the newer Play Store guidelines, this method is categorized as an illegal way of executing functionality on a user’s device. I personally won’t recommend this method anymore.

If your product depends upon a mobile SDK, then you know the real pain of shipping the latest version of your SDK to live devices through host apps. There is a way to tackle this issue with an interesting approach.

We are going to use dynamic class loading using classloaders provided by Android system.

Class loader

The Java classloader is a part of the Java Runtime Environment that dynamically loads Java classes into the Java Virtual Machine. Usually, classes are only loaded on demand. The Java runtime system does not need to know about files and file systems because of classloaders.[2]

We are gonna use a special class loader from Android system called PathClassLoader, which provides a simple ClassLoader implementation that operates on a list of files and directories in the local file system, but does not attempt to load classes from the network. Android uses this class for its system class loader and for its application class loader(s).

Chaining the ClassLoaders


So we can create a hierarchy of classloaders that share class definitions without any duplication. In this case, classloader A has already loaded class A, so loaderB.loadClass('A') will delegate the request to loaderA instead of reloading the class. This feature comes in handy for our purpose.

Application lifecycle on Android OS


When a user taps the app icon, a new application process is started, containing a new instance of the Dalvik VM. This VM has a classloader that loads the dex file (from the APK) and kicks off the lifecycle of the app.

What if we provide one more dex to load at runtime?

Consider that the AAR file we ship contains one more DEX file, which the AAR loads at initialization time. This allows us to change the functionality without having to force-update the AAR on the devices.

Structure of the project

app : Host app where you will integrate the SDK

dependencies {
compile project(":lib")
}

lib : Our static SDK code that ships with the app. This has the secret sauce of loading the functionality runtime.

dynamiclib : SDK that does the implementation of the dynamic functionality of the SDK.

Let’s divide our tasks and decode one by one.

Code architecture

To bring in the dynamic features, I am going with an interface approach. So our dynamic lib and lib will share one common interface.

    public interface IMethods {
        String getVersion();
        void log(String message);
    }

Make sure that the package name for this interface is exactly the same in both the lib and dynamiclib modules.

This IMethods is implemented by a class called MethodsImpl in dynamic lib.

public class MethodsImpl implements IMethods {
    private static final String TAG = "##MethodsImpl##";

    @Override public String getVersion() {
        return "0.1.1";
    }

    @Override public void log(String message) {
        Log.d(TAG, message);
    }
}
Now build this dynamiclib into an APK (as we need a dex file) and host it. I use Python’s SimpleHTTPServer.

python -m SimpleHTTPServer 8000

This makes the dynamic lib available at a URL.

Let’s build the last leg of the solution, downloading the APK and loading the classes using a PathClassLoader.

Downloading the APK

Use any way to download the APK; I have used an AsyncTask.

Store the downloaded APK on internal storage sandboxed for our package

context.getFilesDir() gives the path to the internal sandboxed storage for the package. Store the downloaded APK in this folder.

Load the downloaded APK and cast it to IMethods

// INTERNAL_DEX_PATH = context.getFilesDir() + FILE_NAME, the path to the file we have downloaded
private static void loadSdk(Context context) {
    PathClassLoader pathClassLoader =
            new PathClassLoader(INTERNAL_DEX_PATH, context.getClassLoader());
    try {
        Log.d(TAG, "loading sdk class");
        // This step loads the implementation of IMethods dynamically
        Class sdkClass = pathClassLoader.loadClass("");
        // This step creates a new instance of MethodsImpl and casts it to IMethods
        IMethods methods = (IMethods) sdkClass.newInstance();
        // This should log this message on logcat
        methods.log("testing this");
    } catch (Exception e) {
        Log.e(TAG, "failed to load the sdk class", e);
    }
}
The logs when you run the app are:

12-22 17:16:29.036 10926-10926/ D/## Injector ##: load()
12-22 17:16:29.131 10926-10926/ W/gralloc_ranchu: Gralloc pipe failed

[ 12-22 17:16:29.132 10926:10926 D/ ]
HostConnection::get() New Host Connection established 0xad9f6d80, tid 10926
12-22 17:16:29.174 10926-10954/ D/NetworkSecurityConfig: No Network Security Config specified, using platform default
12-22 17:16:29.222 10926-10954/ D/## Injector ##: File output stream is => /data/user/0/
12-22 17:16:29.523 10926-10953/ I/OpenGLRenderer: Initialized EGL, version 1.4
12-22 17:16:29.526 10926-10953/ D/OpenGLRenderer: Swap behavior 1
12-22 17:16:30.064 10926-10926/ D/## Injector ##: downloaded
12-22 17:16:30.064 10926-10926/ D/## Injector ##: /data/user/0/
12-22 17:16:30.064 10926-10926/ D/## Injector ##: /data/user/0/
12-22 17:16:30.065 10926-10926/ D/## Injector ##: /data/user/0/
12-22 17:16:30.170 10926-10926/ D/## Injector ##: loading sdk class
12-22 17:16:30.171 10926-10926/ D/##MethodsImpl##: testing this

This approach allows you to update the functionality, with a strong interface defined between the shipped code and the dynamic part of the SDK. This covers the case where the host app keeps interacting with your SDK through a set of functions. If your SDK is initialize-once-and-forget, then you can keep the entire logic in the dynamic part of the SDK.


Security is going to be one of the biggest concerns with this approach. There are some standard ways to validate the dex file, such as checking MD5 hashes. Once you securely download the DEX file into internal storage, the remaining concerns are the same as for a shipped SDK.

Github repo for the source code.

Get in touch with me if you need any help or you find something wrong with this post. Happy coding :).


[1] Android system fundamentals

[2] ClassLoaders Wiki

[3] How do classloaders work

[4] Android Application lifecycle

How to use packages specifically for Debug/Release builds in Android


Currently I am working on an Android app for one of the most interesting startups in fintech. I have been really choosy about the packages that get shipped with this app, simply because it involves a lot of money-related functionality. During development, I came across a requirement that debug builds should have Instabug integrated, for reporting UI issues easily. APK size matters a lot, so I wanted to achieve this without shipping the Instabug SDK in production builds.

How to do this?

Gradle file

dependencies {
compile appDependencies.rateUs
compile appDependencies.markdownJ
debugCompile(appDependencies.instabug) {
exclude group: 'com.mcxiaoke.volley'
}
}

I changed the way Gradle compiles the Instabug dependency: it is now compiled during debug builds only, and release builds will not consider it.

How to use this conditional dependency in code?

Create debug and release folders inside the src folder of your app. Create the exact same package structure (same as in main) in both of these folders. In each, create a class of an overriding nature, i.e. with the same name and methods. Add the Instabug initialization inside the debug variant, while the release variant leaves this bit out. Update your main Application class to include the initialization of this newly created class.

Application class in main.

public void onCreate() {

And you are done: now Instabug will be compiled only into debug builds and not into your production builds.

Let me know if there are some issues/suggestions related to this post. Happy coding \m/

Docker + Golang web apps using Godep on AWS with Elastic Beanstalk using CodeShip

This post is pretty old; I no longer use any of the tech mentioned in it and wouldn’t recommend that anyone does. (Except Go, which I still love the most, probably more than ever.)

A lot of things in one title, right :D. We are building a small tool that helps developers distribute mobile application releases easily, mainly during the development phase. Initially we were using the free dyno provided by Heroku as the staging environment. But as the release date approached, it was time to move onto a more scalable and cheaper infrastructure (Heroku is scalable, but becomes a bit costly once you start using it for production purposes).

Golang web application structure

My web application is based on a slightly customised version of Gin + PostgreSQL at the backend, with AngularJS + Bootstrap 3 + custom CSS for the front end. I have been using Godep for dependency management on Heroku, and I feel this is a better way than creating a bash script with all the go get commands: if any of the dependencies introduces changes that break the existing app, you are kind of fucked. Godep basically pins dependencies to the commit id of the code you had checked out during development.

Note 1: A lot of people on the forums also suggested going for a precompiled binary to reduce the size of the Docker images. I haven’t tried that yet. You have to use a few ready-made tools to create a binary for your production environment.

Note 2: If you are not sure which ORM to use, give Gorm a try.

First things first: install boot2docker

As you will first test everything locally, you will need boot2docker. Just initialise it with the following commands:

boot2docker init
boot2docker start

Creating a Dockerfile

# Using wheezy from the official golang docker repo
FROM golang:1.4.1-wheezy

# Setting up working directory
WORKDIR /go/src/<project>
ADD . /go/src/<project>/

# Get godeps from main repo
RUN go get

# Restore godep dependencies
RUN godep restore

# Install
RUN go install <project>

# Setting up environment variables

# My web app is running on port 8080, so expose that port to the world
EXPOSE 8080
ENTRYPOINT ["/go/bin/<project>"]

Most of the things in this Dockerfile are self-explanatory (see the list of commands).

The Golang team is kind enough to publish Debian images with the latest Golang versions. So the first line means that I intend to run this in a container with Debian (wheezy) as the OS and Golang 1.4.1.

What about the DB connection ?

In my dev environment I had a Postgres instance running locally. I struggled a bit to allow my Docker container to connect to the database on the host, and it was taking a lot of time. So as a quick fix, I created a new app on Heroku and added its free Postgres add-on.

Building the image

So we have a Dockerfile; the next step is to build the image for the container and run it. Your working directory should be the root of your project.

docker build -t <project_name> .

You will see all the commands we added in the Dockerfile execute one by one. If everything goes well, at the end you will see Successfully built <ID>.

If you see this message then we are all set to deploy the project to the cloud. For testing, you can run the container with the following command and check it from your local browser:

docker run --publish 3000:8080 --name <tag-name> --rm <project-name>

Find the IP address of your boot2docker VM using the following command:

boot2docker ip

Now go to your browser, enter http://ip_address:3000 as the address, and everything should work.

Setting up continuous integration

I went through a couple of CI tools supporting Golang web apps and settled on Codeship. (I am evaluating Shippable.)

For my Go projects, I don’t keep up with the regular naming conventions, so I had to work a bit on the test scripts, as Codeship assumes you are following them. (The reason I don’t follow the conventions is that the project is not supposed to be reused as-is by anyone else :))

mv ../distribute ${HOME}/src/distribute
cd ${HOME}/src/distribute
go get
godep restore
go get

Then you can add your testing commands. I am not using any framework for writing tests; the default testing package is more than enough for me at this stage.

go test <module_to_test> -v

Setting up the deployment hook

Codeship comes with an Elastic Beanstalk deployment hook. You have to configure a few things before you can use it.

Configuring ElasticBeanstalk environment

Create IAM user for codeship

I keep using different users for these different services so that access can be managed independently. I created a user with full access to Elastic Beanstalk; there is a ready template for this in the AWS IAM panel.

Create a new application

Create a new application in Elastic Beanstalk and don’t select the demo application option. You will see a Create Application button at the top right of the panel. For this you will have to upload a zip of your source code manually.

Note: I am also using an ELB in front of my EC2 instances. For https, I do the SSL termination at the LB level and then redirect to my Golang application.

Create a new environment

Create a new environment inside your new application.

Configure codeship deployment hook

Once you have configured AWS, go to Codeship and fill in all the information on the AWS Elastic Beanstalk deployment settings page.


If all goes well, the entire pipeline is ready. For testing, make a small change and push it to your configured branch. You will then see the new task in Codeship along with its exact status :).