gRPC over HTTP/2 or: How I learned to stop depending on REST and love gRPC

Jakub Pietrzyk

In today’s buzzword-oriented world, you can’t spend all of your precious time following every latest technology. When creating APIs, you rarely think twice – obviously REST is your best friend. But have you ever heard of companies like Google or Dropbox? I bet you have. They’ve approached this problem differently. So did we, in our last project for a Berlin startup. We decided to use gRPC instead of REST, and I would like to tell you why.

What is gRPC?

gRPC (Google Remote Procedure Call) is a high-performance, language-agnostic RPC framework built on the HTTP/2 protocol and Protocol Buffers (though other serialization formats can be used). Generally speaking, it allows a client and a server to communicate in a performant, transparent way. gRPC was developed by Google, and it’s open source: https://github.com/grpc/

Why gRPC over RESTful API?

RESTful interface is hard
A messy combination of HTTP verbs, resources, URL identifiers and headers makes it complicated. And honestly speaking, have you ever seen an interface that conforms to all of the RESTful principles? How many times has something named ‘REST’ been just a regular HTTP interface underneath? After all, the goal is simple – just invoke some procedure on a different machine.

On the contrary, gRPC is easy to use and maintain thanks to:

  • The protobuf generator mechanism and RPC definitions, which together generate client and server code out of the box,
  • Strongly typed messages, which eliminate a whole class of runtime errors,
  • A well-defined error API (no more guessing at HTTP statuses),
  • Easy versioning (based on descriptors and optional proto fields).

No API contract
Although gRPC uses protobuf as its standard interface description language (IDL), you can use whatever IDL you wish, no matter if it’s JSON, XML or Thrift. The same goes for REST – you don’t have to use JSON, but it is usually assumed to be the standard way to transfer data.

For the sake of this blog post, let’s assume that gRPC == protobuf and REST == JSON as data formats. Additionally, let’s forget about WADL and Spring Cloud Contract, as these are not part of the standard.

Imagine that there is a Person entity with the following fields: name, id and email. Using protobuf, it can look like this:

message Person {
  string name = 1;
  int32 id = 2;
  string email = 3;
}

So, what will happen if some day you need to remove the email field because it’s no longer supported by your API? JSON doesn’t provide a standard schema, so you will encounter a lot of runtime errors while clients keep trying to send the now-invalid data. And what about protobuf? Code that still uses the field simply won’t compile – it’s that easy:

[ERROR] HelloWorldClient.java:[54,71] cannot find symbol
[ERROR] symbol:   method setEmail(java.lang.String)
[ERROR] location: class jlabs.helloworld.HelloRequest.Builder
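
Protobuf even has a mechanism for making such removals safe in the long run: once the email field is gone, its number and name can be marked reserved so that no later revision accidentally reuses them with a different type. A minimal sketch of the trimmed message:

```proto
message Person {
  // Field number 3 previously belonged to email; reserving it
  // prevents any future field from reusing the number or name.
  reserved 3;
  reserved "email";

  string name = 1;
  int32 id = 2;
}
```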

Request streaming

REST-based APIs use the well-known request/response model: API consumers send requests to an API server and receive a response. There is no way to implement streaming without some third-party technology, e.g. WebSockets.

On the opposite side, gRPC supports all possible streaming scenarios:

  • Client streaming: the client writes a sequence of messages as a stream instead of a single request (e.g. event-driven architectures),
  • Server streaming: the server sends a stream of responses to a single client request (e.g. real-time data like exchange rates),
  • Bidirectional streaming: the client streams its data while, at the same time, the server is able to send its own stream of responses.
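
In a .proto file, the three scenarios above differ only in where the stream keyword appears. A sketch, using hypothetical message types (Event, Ack, RateRequest, PriceTick, ChatMessage):

```proto
service StreamingDemo {
  // Client streaming: many requests, one response.
  rpc PublishEvents (stream Event) returns (Ack);

  // Server streaming: one request, many responses.
  rpc SubscribeRates (RateRequest) returns (stream PriceTick);

  // Bidirectional streaming: both sides stream independently.
  rpc Chat (stream ChatMessage) returns (stream ChatMessage);
}
```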

Performance
It’s hard not to mention two key features of gRPC:

Native HTTP/2 support
As of March 2019, more than 30% of the top 10 million websites supported HTTP/2. This standard is not backward compatible with its predecessors (HTTP/1.x) because of differences in how the data is formatted (frames) and transported (streams). The main goals for HTTP/2 were:

  • Binary framing: all HTTP/2 requests/responses are split into messages and frames, encoded in a binary format (like protobuf),
  • Multiplexing: you no longer need multiple HTTP connections to achieve parallelism. One connection per origin is enough for multiple, parallel requests/responses. It lowers page load times and improves utilization of network resources,
  • Server push: you don’t have to request additional resources (like CSS and JS) anymore. Just make a single call to the server, and it can push all of them,
  • Header compression: before HTTP/2, headers were just plain text. HTTP/2 compresses request/response headers using HPACK compression, which greatly reduces message size.

Protobuf
Protobuf is an extensible, language-neutral way of serializing structured data that enables faster and simpler communication. In a nutshell, you define your message structure, and then you are able to generate client and server code based on it. It’s available for almost any programming language – Java, JavaScript, C# and Python included.

How does it work? Just define a .proto file with some data as follows:

message Person {
  string name = 1; //strongly typed name
  uint32 id = 2; // variable-length encoding unsigned int

  enum PhoneType { //Enumeration type
    MOBILE = 0;
    HOME = 1;
    WORK = 2;
  }

  message PhoneNumber { // inner type
    string number = 1;
    PhoneType type = 2;
  }

  repeated PhoneNumber phone = 4; //list of phone numbers
}

As you can see, a .proto file is simple: it contains uniquely numbered, strongly typed fields with names and values. You can even define another message type (PhoneNumber) inside the message, or use an import statement, which helps you structure your data.

Then, you can add an RPC service – this is part of the gRPC specification; it has nothing to do with protobuf itself.

service PersonService {
	rpc SayHello (Person) returns (PersonReply);
}

We’ve defined the Person message and a person-related service; what else can we do with it? Compile it. As output you’ll get both server and client code that is ready to use. Here is the client code for our example, generated in Java:

Person person = Person.newBuilder().setName(name).setId(1).build();
PersonReply reply = blockingStub.sayHello(person);

Simple enough? I think so. Any other advantages? Think about the same payload written in XML – its actual size is about 60 bytes:

<person>
    <name>Jakub P</name>
    <id>1</id>
</person>

In comparison, the compiled protobuf Person is about 17 bytes. According to the Protocol Buffers documentation, protobuf payloads are 3-10 times smaller and 20-100 times faster than XML.
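
The size difference is easy to verify by hand, because the protobuf wire format is simple enough to encode manually. The sketch below (a hypothetical WireFormatDemo class, not generated code) encodes the two fields from the XML example exactly as protobuf frames them – a tag byte per field, followed by its value:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class WireFormatDemo {

    // Hand-encodes Person{name, id} using the protobuf wire format:
    // each field is a tag byte (field number << 3 | wire type)
    // followed by its payload.
    static byte[] encodePerson(String name, int id) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
        out.write(0x0A);                            // field 1, wire type 2 (length-delimited)
        out.write(nameBytes.length);                // length prefix (fits in one varint byte here)
        out.write(nameBytes, 0, nameBytes.length);  // UTF-8 bytes of the name
        out.write(0x10);                            // field 2, wire type 0 (varint)
        out.write(id);                              // id=1 fits in a single varint byte
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // 11 bytes for name + id, versus ~60 bytes for the XML version.
        System.out.println(encodePerson("Jakub P", 1).length);
    }
}
```

Note that this hand-rolled encoder only handles values that fit in a single varint byte; the real protobuf runtime handles arbitrary lengths, but for this message it produces the same framing.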

Our experience with gRPC

At j‑labs, we’ve been working with gRPC from the very beginning, using it as our main communication framework for both frontend and backend systems. During this time, we’ve created a full end-to-end gRPC architecture in our polyglot environment using Python, Rust and JavaScript (the grpc-web project). Moreover, we’ve incorporated the gRPC compiler into the Pants build system, adding support for Python and Rust. We’ve also learned the hard way that gRPC isn’t perfect; its main disadvantages are:

  • Difficult to debug (especially on the frontend) because of its binary format,
  • Lack of gRPC clients – Postman and curl do not support gRPC yet,
  • Lack of gateways – using grpc-web (the frontend implementation) requires you to choose between Envoy and Nginx (Envoy being the default choice),
  • Lack of HTTP/2-compatible load balancers,
  • Most gRPC libraries are still under development (not officially released yet).

Conclusion

In my opinion, gRPC – or, speaking more broadly, RPC with some binary data representation like Protocol Buffers – is the future. Scaling down your data size and processor consumption can help you save a lot of $$$ on your cloud services. Moreover, strongly typed messages, a client-server contract and ease of use will boost the development process once your DevOps team overcomes all of the gRPC-related deployment problems with load balancers and health checks. Last but not least, companies like Google, Netflix, Docker and Cisco have actually adopted gRPC as a standard for communication between their microservices.
