Using fasthttp to make API requests in Golang

When I recently started a new project using Go, I decided to give fasthttp a try. fasthttp has been holding its ground for the last few years as the fastest server for Go on the TechEmpower benchmarks.1 Since I was already using fasthttp’s server functionality, I figured I would evaluate how well its client would work for me to make requests to an external API.

A Somewhat Realistic Example

What would it look like to retrieve a gzipped response from a JSON API using fasthttp? The code below is my best recommendation, including basic error handling:


func ExampleGetGzippedJsonWithFastHttp() {
    req := fasthttp.AcquireRequest()
    defer fasthttp.ReleaseRequest(req)
    req.SetRequestURI("https://httpbin.org/json")
    // fasthttp does not automatically request a gzipped response.
    // We must explicitly ask for it.
    req.Header.Set("Accept-Encoding", "gzip")

    resp := fasthttp.AcquireResponse()
    defer fasthttp.ReleaseResponse(resp)

    // Perform the request
    err := fasthttp.Do(req, resp)
    if err != nil {
        fmt.Printf("Client get failed: %s\n", err)
        return
    }
    if resp.StatusCode() != fasthttp.StatusOK {
        fmt.Printf("Expected status code %d but got %d\n", fasthttp.StatusOK, resp.StatusCode())
        return
    }

    // Verify the content type
    contentType := resp.Header.Peek("Content-Type")
    if !bytes.HasPrefix(contentType, []byte("application/json")) {
        fmt.Printf("Expected content type application/json but got %s\n", contentType)
        return
    }

    // Do we need to decompress the response?
    contentEncoding := resp.Header.Peek("Content-Encoding")
    var body []byte
    if bytes.EqualFold(contentEncoding, []byte("gzip")) {
        fmt.Println("Unzipping...")
        body, err = resp.BodyGunzip()
        if err != nil {
            fmt.Printf("Failed to unzip response: %s\n", err)
            return
        }
    } else {
        body = resp.Body()
    }

    fmt.Printf("Response body is: %s", body)
}

In the example, we start by acquiring request and response instances from fasthttp’s pools. Pooling these instances allows them to be re-used for later requests, thereby avoiding the performance penalty of allocating new buffers every time we make a request. We explicitly add an Accept-Encoding: gzip header to tell the API that we can handle a compressed response, since fasthttp will not add this header for us. After ensuring the response returned with status 200 OK and JSON content, we check the Content-Encoding header to determine whether the response body needs to be decompressed. If compressed, we can use the convenient response.BodyGunzip() method; an uncompressed response should be read with response.Body().

In contrast, doing the same thing with the built-in net/http package saves a couple of lines of code because, as long as we don’t override it, the default Transport will automatically request and decompress a gzipped response. If we manually add an Accept-Encoding request header, the default Transport detects this and disables automatic decompression. In the example below, notice how compression is entirely implicit.


func ExampleGetGzippedJsonWithNetHttp() {
    req, _ := http.NewRequest(http.MethodGet, "https://httpbin.org/json", nil)
    // The built-in net/http Transport automatically requests a gzipped response
    // and also automatically unzips it for us in the body.
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        fmt.Printf("Client get failed: %s\n", err)
        return
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        fmt.Printf("Expected status code %d but got %d\n", http.StatusOK, resp.StatusCode)
        return
    }

    // Verify the content type
    contentType := resp.Header.Get("Content-Type")
    if !strings.HasPrefix(contentType, "application/json") {
        fmt.Printf("Expected content type application/json but got %s\n", contentType)
        return
    }

    // Note that we haven't needed to check for gzip compression
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        fmt.Printf("Failed to read response body: %s\n", err)
        return
    }

    fmt.Printf("Response body is: %s", body)
}
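We can observe this detect-and-disable behavior against a local test server using the standard library’s httptest package. In the sketch below, the server and helper names (`newGzipServer`, `fetch`) are made up for illustration:

```go
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// newGzipServer starts a local test server that gzips the body "hello"
// whenever the client advertises gzip support.
func newGzipServer() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Accept-Encoding") == "gzip" {
			w.Header().Set("Content-Encoding", "gzip")
			zw := gzip.NewWriter(w)
			io.WriteString(zw, "hello")
			zw.Close()
			return
		}
		io.WriteString(w, "hello")
	}))
}

// fetch performs a GET, optionally setting Accept-Encoding by hand.
// It returns the body as read and whether the Transport decompressed it.
func fetch(url string, manualHeader bool) (string, bool) {
	req, _ := http.NewRequest(http.MethodGet, url, nil)
	if manualHeader {
		req.Header.Set("Accept-Encoding", "gzip")
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body), resp.Uncompressed
}

func main() {
	srv := newGzipServer()
	defer srv.Close()

	// Default behavior: the Transport adds Accept-Encoding: gzip and
	// transparently decompresses, so we read plain text.
	body, uncompressed := fetch(srv.URL, false)
	fmt.Printf("default: %q (transport decompressed: %v)\n", body, uncompressed)

	// Manually setting the header disables transparent decompression:
	// we now receive the raw gzip bytes and must decompress ourselves.
	raw, _ := fetch(srv.URL, true)
	fmt.Printf("manual header: %d raw bytes\n", len(raw))
}
```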

The extra few lines of code in the fasthttp example illustrate how the package aims to be explicit about possible performance bottlenecks. While it makes it easy to handle compressed responses, it still forces us to consider if it’s worth the extra CPU cycles (and lines of code) to manually enable compression.

Performance

Speaking of a focus on performance, how does the fasthttp client compare to the net/http client? The GitHub README claims a ten-fold advantage. However, after reading through the client benchmarking code, the TCP benchmarks didn’t seem quite right: they were comparing a fasthttp client connecting to a fasthttp server with a net/http client connecting to a net/http server. If we just want to know client performance, then the tests should be connecting to the same server.

I wrote several benchmarks to compare the clients when connecting to the same server. The results weren’t nearly as dramatic as claimed:
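The shape of such a same-server benchmark can be sketched with only the standard library; a fasthttp variant would mirror `benchNetHTTP` with fasthttp.Do against the same URL. The helper names here are hypothetical, not the actual benchmark code from my repository:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

// newHelloServer is a stand-in backend; for a fair client comparison,
// every client benchmark should dial this same server.
func newHelloServer() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "hello")
	}))
}

// benchNetHTTP measures the net/http client against the given URL.
func benchNetHTTP(url string) testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			resp, err := http.Get(url)
			if err != nil {
				b.Fatal(err)
			}
			// Fully drain and close the body so the connection is reused.
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
		}
	})
}

func main() {
	srv := newHelloServer()
	defer srv.Close()
	fmt.Println("net/http:", benchNetHTTP(srv.URL))
}
```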


$ go test -bench='OverTCP' -benchmem -benchtime=10s
goos: windows
goarch: amd64
pkg: github.com/davidbacisin/fasthttp-request-perf
BenchmarkNetHttpClientOverTCPToFastHttpServer-8     77619    152802 ns/op    3512 B/op    44 allocs/op
BenchmarkFastHttpClientOverTCPToFastHttpServer-8    88010    128418 ns/op       2 B/op     0 allocs/op

The fasthttp client was only 1.13 times faster than the net/http client! Nonetheless, the fasthttp client stuck to its word in minimizing memory allocations. For comparison, I also ran the original benchmarks on my machine:


$ go test -bench="EndToEnd100TCP" -benchmem -benchtime=10s
goos: windows
goarch: amd64
pkg: github.com/valyala/fasthttp
BenchmarkClientGetEndToEnd100TCP-8           90073    120276 ns/op     166 B/op     0 allocs/op
BenchmarkNetHTTPClientGetEndToEnd100TCP-8    78714    144092 ns/op    5934 B/op    65 allocs/op

These results were not appreciably different from my own benchmarks: 1.14 times faster. My concern about using two different servers wasn’t nearly as important as I had thought. Instead, I suspected that Windows’ limitations on TCP connections diluted the performance gains, and indeed, CPU profiling2 revealed that about 75% of the time was cumulatively spent in system calls for sending over TCP. I can only assume that the original author ran the benchmarks on a platform that is much more performant when it comes to TCP.

What if we eliminated the TCP bottleneck, similar to the DoFastServer tests in the original benchmarks? To test this, I replaced the Dial function in each client with, effectively, direct function calls. I used a channel to synchronize writing to and reading from the fake connection, but ultimately this setup eliminated any network or system calls. The “connection’s” Read method copies the response data directly into the request buffer.

With these changes in place, the fasthttp client is dramatically faster than the net/http client, by more than a factor of 8.


$ go test -bench="MockServer" -benchmem -benchtime=10s
goos: windows	
goarch: amd64
pkg: github.com/davidbacisin/fasthttp-request-perf
BenchmarkNetHttpClientToMockServer-8                        2987875    4091 ns/op    3418 B/op    40 allocs/op 
BenchmarkFastHttpClientWithManagedBuffersToMockServer-8    24990039     468 ns/op       0 B/op     0 allocs/op 

The original benchmarks (DoFastServer) show about a 12x boost over net/http on my machine. It’s a similar order of magnitude, so I’ll chalk it up to subtle differences in what is actually being measured.

Customizing fasthttp to your needs

Now that we’ve established that the fasthttp client can be significantly faster than the net/http client when it isn’t limited by system bottlenecks, how can we go beyond my original example when using fasthttp in practice?

As I mentioned earlier, fasthttp provides Request and Response pools as a way to avoid expensive memory allocations by re-using old buffers. When you release requests and responses back to their pools, those buffers can be re-acquired later. As a result, the buffers only need to be allocated once, and perhaps occasionally resized.


func ExampleGetWithFastHttpManagedBuffers() {
	url := "https://golang.org/"

	// Acquire a request instance
	req := fasthttp.AcquireRequest()
	defer fasthttp.ReleaseRequest(req)
	req.SetRequestURI(url)

	// Acquire a response instance
	resp := fasthttp.AcquireResponse()
	defer fasthttp.ReleaseResponse(resp)

	err := fasthttp.Do(req, resp)
	if err != nil {
		fmt.Printf("Client get failed: %s\n", err)
		return
	}
	if resp.StatusCode() != fasthttp.StatusOK {
		fmt.Printf("Expected status code %d but got %d\n", fasthttp.StatusOK, resp.StatusCode())
		return
	}
	body := resp.Body()

	fmt.Printf("Response body is: %s", body)
}

Note that we must be sure not to use the response buffer after it has been released back to the pool in order to avoid data races. That leaves us with three options:

  1. Fully process the response body before releasing the response back to the pool.
  2. Manage our own buffers instead of relying on fasthttp’s pools.
  3. Copy the body into a newly allocated buffer before releasing the response.

That last option is a non-starter because it allocates memory when a main advantage of fasthttp is being able to avoid allocating memory. Fortunately, we can easily provide our own buffers to fasthttp and take full control over when to allocate, pool, and release those buffers.


func ExampleGetWithSelfManagedBuffers() []byte {
	url := "https://golang.org/"

	var body []byte // This buffer could be acquired from a custom buffer pool

	statusCode, body, err := fasthttp.Get(body, url)
	if err != nil {
		fmt.Printf("Client get failed: %s\n", err)
		return nil
	}
	if statusCode != fasthttp.StatusOK {
		fmt.Printf("Expected status code %d but got %d\n", fasthttp.StatusOK, statusCode)
		return nil
	}

	fmt.Printf("Response body is: %s", body)

	return body
}

Furthermore, managing our own buffers enables us to limit the memory usage of our entire application and be more particular about when buffers are freed. Internally, fasthttp uses Go’s sync.Pool and valyala’s bytebufferpool, neither of which offers memory limits or customizable management rules. In fact, the sync.Pool documentation warns that pooled items may be deallocated automatically at any time.
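A self-managed pool along these lines can be sketched with the standard library alone. The helper names below are hypothetical, and the sketch only mirrors the spirit of fasthttp’s internal pooling; testing.AllocsPerRun shows that, once warm, re-using pooled buffers costs essentially no allocations per request:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
	"testing"
)

// A minimal self-managed buffer pool, similar in spirit to what fasthttp
// does internally (though fasthttp layers bytebufferpool on top).
var bufPool = sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

func acquireBuffer() *bytes.Buffer { return bufPool.Get().(*bytes.Buffer) }

func releaseBuffer(b *bytes.Buffer) {
	b.Reset() // clear old contents before the next user sees the buffer
	bufPool.Put(b)
}

// steadyStateAllocs reports the average allocations per simulated request
// once the pool is warm.
func steadyStateAllocs() float64 {
	return testing.AllocsPerRun(1000, func() {
		b := acquireBuffer()
		b.WriteString("pretend this is a response body")
		releaseBuffer(b)
	})
}

func main() {
	fmt.Printf("allocs per request: %.0f\n", steadyStateAllocs())
}
```

Because sync.Pool may discard items at any garbage collection, a truly memory-limited pool would need its own bookkeeping (for example, a bounded channel of buffers), which is exactly the extra work the article alludes to.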

I’ll stick to the easy Request and Response pools until profiling or a unique use case indicates I should go to the extra work of managing my own buffers.

Caveats and Conclusion

fasthttp is not a drop-in replacement for net/http, and that is by design. It combines clever instruction optimizations with careful memory management, so to use it effectively, you’ll need to embrace pre-allocated or pooled buffers. Switching an existing project from net/http to fasthttp could be a challenge that yields disappointing performance improvements, especially if the performance bottleneck lies elsewhere (such as in TCP connections). Implementation of HTTP/2 is moving slowly, whereas net/http has supported the protocol for several years. And fasthttp is maintained by a small community, which is always a consideration for maintainability.

Despite its limitations, the focus on performance is compelling. My project gives me room to experiment, so I’m going to stick with using fasthttp for both server and client.

The code for these examples and benchmarks is available on my GitHub.

  1. Note that atreugo outperforms fasthttp for some of the TechEmpower benchmarks, but atreugo is actually a web framework built on top of fasthttp.
  2. For more information on how to profile Go applications, see the blog post at https://blog.golang.org/profiling-go-programs.