
Back in the day...
When it first became necessary to establish interfaces between remote IT systems, our heroes were called RPC, CORBA or SOAP. Later, the concepts of the WWW, the simplicity of the HTTP protocol, and the good integration options in clients led to the hype around RESTful APIs. These now form the de facto standard for web-based interfaces. But the tide is turning.
In recent years, the way data is requested and made available via web interfaces has changed significantly. Lean clients backed by complex backends that prepare data for them are increasingly giving way to lean backends and more complex clients. The key to the success of this approach is how clients can request and manipulate data. Whether and how explicit interfaces must be defined plays an important role here.
tl;dr
We look at current alternatives to REST and their conceptual approaches. We chose the following candidates:
- Falcor — Minimize latency by providing all data in one model
- GraphQL — interface definition through query language with schema validation
- gRPC — simple, modern RPC framework
- Pure JSON — exchange data with JSON via websockets
Our focus is on describing the REST alternatives with their properties and use cases, not on comparing them with each other.
Falcor
Falcor is not easy to describe in one word: protocol, data platform and middleware for web applications all apply, but so does asynchronous model-view-controller (MVC) pattern implementation. This asynchronous model, the M in MVC, addresses a basic problem of modern applications: high latency when requesting data. The data is accessed using JavaScript operations such as get(), set(), call() and so on.
All in one model
On a Node.js-capable server, the relevant data from one or more data sources is built up and made available as a single virtual in-memory JSON graph, the model.json. In this way, all data relevant to a client can be fetched with a single request against the model.json, instead of many individual requests against various data sources. If the data underlying the graph changes in the data sources, the JSON graph must be updated on the server side; the client is informed about this via a callback function. For its part, the client can mark part of its model as purged and request that data from the server model again.

If a client requests data repeatedly, it is not fetched from the server but from the model's local cache. The size of the model is limited at most by physical memory. The complexity of distributed applications thus shifts from the client to supplying the model with data.
Where is the API here?
The answer lies in the structure of the data. If you know the data model, you also know the API.
var name = await model.getValue("apis[0].name");
On the client side, Falcor uses JSON paths instead of the URLs we know from REST. Dedicated REST-like interfaces and their implementations are not needed in this form.
The example above assumes that the Falcor model contains a JSON array apis whose entries have a name property. The challenge with Falcor is therefore to find out how the available data is structured, and Falcor unfortunately offers no support here; only communication with the provider of the data helps. It can be even more difficult to anticipate the data types used in the model in order to work with them effectively.
The good, the bad...
✅ Using plain JavaScript, a client queries the relevant section of values on the JSON graph as if it were available locally. Access to a remotely supplied Falcor data model looks very similar: the client does not work directly on the server's model.json but on its own representation of the model. There is one difference to local access: the Falcor model works asynchronously.
❌ In any case, Falcor has limits when it comes to parameterizing queries beyond indices and index ranges.
❌ A search for data in the Falcor model is not integrated.
✅ Falcor supports batching. This allows several potentially small requests to be combined into one request and transmitted together to the server.
✅ Falcor works with references within its JSON model. Instead of returning repeated data to the client, the data is returned once and referenced wherever else it is used in the JSON.
✅ Falcor does not offer a way to have the entire model returned in one request. A model that had a hundred nodes yesterday can easily contain a hundred million nodes today, so it makes perfect sense to return only what was actually requested.
❌ Falcor comes with many built-in features; security is not one of them. Here you depend on what the JavaScript universe offers. Individual paths can be secured with routers.
✅❌ Once the relevant data has ended up in the Falcor model, it can potentially be used without additional server-side implementation. Updates, however, have to be handled by ourselves.
✅ If paths in the Falcor model need special treatment, router classes are recommended; they can be used to influence or transform the returned data with JavaScript.
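The batching behavior mentioned in the list above can be sketched in a few lines of plain JavaScript. This is an illustration of the idea, not Falcor's actual implementation; createBatcher and the fake server are our own names.

```javascript
// Illustrative batcher: individual path requests are queued and sent to the
// "server" together in a single round trip when flush() is called.
function createBatcher(sendBatch) {
  const queue = [];
  return {
    request(path) {
      queue.push(path);
    },
    flush() {
      return sendBatch(queue.splice(0)); // one round trip for all queued paths
    },
  };
}

// Fake server that answers each path and counts round trips.
let roundTrips = 0;
const batcher = createBatcher((paths) => {
  roundTrips += 1;
  return paths.map((p) => `value of ${p.join(".")}`);
});

batcher.request(["apis", 0, "name"]);
batcher.request(["apis", 1, "name"]);
const values = batcher.flush();
console.log(values, "round trips:", roundTrips); // two requests, one round trip
```

In Falcor itself, batching is triggered automatically rather than via an explicit flush; the gain is the same: several small requests cost only one network round trip.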
Who is Falcor worthwhile for?
If there is a requirement to perform many distributed read operations on data, Falcor makes it relatively easy to combine these operations in a meaningful way and thus reduce latency for complex queries. Write access to a Falcor model is also provided for.
Falcor does not seem to have an active community beyond the JavaScript implementation, which is unfortunately reflected in Falcor's perceived low adoption. In addition to the JavaScript variant, there is a Java implementation, which is not regularly maintained, at least on Maven Central. Falcor for .NET has been orphaned for five years.
Falcor is easy to understand and learn and comes with most features needed for use. The Falcor API is well documented in any case. Based on the many demos, it is easy to create at least a prototype for your own Falcor use cases.
GraphQL — get what you expect
GraphQL takes a completely different approach to providing interfaces. GraphQL is an API language on the one hand, and a runtime environment and type system for this same API language on the other, both for reading and for manipulating data.

The original form of the data provided via a GraphQL server (database or JSON graph) is relevant only to the server. To the client, the GraphQL server appears as one data source that is available via a single interface, here /graphql.
And no: GraphQL has nothing to do with graph databases, even though there is a GraphQL integration for Neo4j, the top dog among graph databases. A distinctive feature of GraphQL is that request and response share the same structure. In response to the request written in GraphQL notation
query APINames {
  apis {
    name
  }
}
I get the names of all entries in the apis array as JSON:
{
  "data": {
    "apis": [
      {"name": "GraphQL"},
      {"name": "Falcor"}
    ]
  }
}
The structure of the request dictates the expected structure of the response. If I am only interested in individual elements of apis, I can include the corresponding id (if available in this form) with the request
query apiName($id: ID) {
  apis(id: $id) {
    name
  }
}
and, for id=1, receive for example the following answer:
{
  "data": {
    "apis": {
      "name": "GraphQL"
    }
  }
}
In the vast majority of cases, RESTful APIs are tied to HTTP. With GraphQL, it is possible to choose between HTTP and WebSockets.
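Over HTTP, a parameterized GraphQL request is conventionally sent as a POST whose JSON body carries the query string and a separate variables object. A minimal sketch (the endpoint and field names follow the examples above and are illustrative):

```javascript
// Build the JSON body for a GraphQL HTTP POST request:
// the query string plus a separate "variables" object.
function buildGraphQLRequest(query, variables) {
  return JSON.stringify({ query, variables });
}

const body = buildGraphQLRequest(
  `query apiName($id: ID) {
    apis(id: $id) {
      name
    }
  }`,
  { id: "1" }
);

// With fetch, this body would be POSTed to the single /graphql endpoint:
// fetch("/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// }).then((res) => res.json());

console.log(JSON.parse(body).variables.id); // "1"
```

Keeping variables out of the query string is what makes the same query text reusable and cacheable across different parameter values.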
And where is the API here?
Similar to Falcor, the client determines which data should be delivered in which format. Through parameterizable queries, exactly the desired amount of data can be provided.
Who wouldn't want this: information from the interface about which queries, types and parameters are actually possible. GraphQL's type system already provides this via the __schema field at the root of each interface. The request
{
  __schema {
    types {
      name
    }
  }
}
delivers, for example:
{
  "data": {
    "__schema": {
      "types": [
        {"name": "String"},
        {"name": "ID"},
        ...
      ]
    }
  }
}
You can therefore get an idea of which data types are used even before using a GraphQL interface. This GraphQL feature is called introspection.
The good, the bad...
✅ The structure and types of data are defined unambiguously via the request, i.e. via the interface. The GraphQL server knows exactly in which form requested data is expected.
✅ The data model on a GraphQL server can be built from several data sources. The data from these data sources is given a potentially hierarchical structure on the server, which makes access easier. For the client, the GraphQL server appears as a data source.
✅❌ Unlike with Falcor, for example, the server-side implementation for a request must be written by yourself. Whether this is good or bad, everyone has to decide for themselves.
❌ Since all requests to a GraphQL interface potentially end up at the same URL, typical security measures such as URL filters on the web application firewall are ineffective at best. The GraphQL schema, which is public by default, must be explicitly protected against unauthorized access.
✅ However, using the same URL also has advantages: Reporting, monitoring and tracing become easier.
✅❌ Using the GraphQL schema to validate request and response costs runtime, a lot of runtime. However, the validation rules can be overridden and thus implicitly minimized. This way you can apply your own rules to check that the client complies with the interface contract.
✅ Similar to Falcor, multiple requests to a GraphQL server can be combined and processed together. The latency of requests can be minimized in a similar way to Falcor.
✅ Search functions can be built with GraphQL — unlike in Falcor.
✅ Parameterizing requests is easy with GraphQL (see example above).
Who is GraphQL worthwhile for?
The GraphQL type system, its query language and the associated runtime environment allow flexible requests from clients to a GraphQL server without explicit changes to the interface in question, unlike what is often the case with RESTful APIs. It does not matter whether the client accesses are reads or writes. There are established frameworks for using GraphQL from web clients, such as Relay for single-page applications.
GraphQL has a broad, active community, and server and client implementations exist for many programming languages and frameworks. Java, PHP, Python, Go and C# implementations are just the most typical examples.
On the one hand, GraphQL's learning curve is somewhat steeper than Falcor's, and depending on the language or framework used, additional libraries must be integrated. On the other hand, those many additional libraries offer a rich wealth of functionality, which makes it easy to solve many challenges and integrate GraphQL's options.
gRPC
gRPC is an RPC framework focused on scalability and throughput. With gRPC, a client application invokes a method of an application running on a remote server as if that remote application were running locally on the same machine as the client. The client knows the signature of the method with its parameters and return values. Up to this point, the concept is familiar from classic RPC.

On the server side, i.e. the remote application, a gRPC server runs that implements the method's interface. On the client side, there is a gRPC stub for the method interface, with which the client application actually communicates locally.
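The stub idea can be illustrated with a toy, in-process sketch in plain JavaScript. This is not real gRPC code (real gRPC uses HTTP/2 and protobuf instead of the JSON "transport" simulated here); all names are our own.

```javascript
// Server side: the actual implementation of the service interface.
const serverImpl = {
  sayHallo(request) {
    return { message: `Hallo, ${request.name}!` };
  },
};

// Transport simulation: in real gRPC this would be HTTP/2 + protobuf.
function transport(serializedCall) {
  const { method, request } = JSON.parse(serializedCall);
  const reply = serverImpl[method](request);
  return JSON.stringify(reply);
}

// Client side: the stub exposes the remote method as if it were local.
const stub = {
  sayHallo(request) {
    const wire = JSON.stringify({ method: "sayHallo", request });
    return JSON.parse(transport(wire));
  },
};

console.log(stub.sayHallo({ name: "gRPC" }).message); // "Hallo, gRPC!"
```

The point of the sketch: the client never deals with serialization or the network; from its perspective, stub.sayHallo() is just a local method call.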
Where is the API here?
The interface used by gRPC client and server is by default described with Protocol Buffers as the Interface Definition Language (IDL) and wire format. The interface is stored in a .proto text file. With the help of protoc, the language-specific code used by gRPC client and server is generated.
service ExampleService {
  rpc HalloRpc (HalloRequest) returns (HalloReply) {}
}

message HalloRequest {
  string name = 1;
}

message HalloReply {
  string message = 1;
}
Unlike Falcor and GraphQL, there is therefore an explicit API to be defined.
The good, the bad...
✅❌ There is an explicit contract that must be defined and fulfilled by client and server. If I mostly have read-only access and do not need to explicitly protect my data, this means overhead compared to Falcor, for example. Whether that speaks for or against gRPC, everyone has to decide for themselves.
✅ easy to learn — protobuf is easy to understand
✅ relatively platform-independent or language-independent
✅ Messages are transferred instead of resources + verbs (REST)
✅ gRPC can be extended with gateway functionality via a plug-in
✅ While GraphQL and Falcor are better suited to hierarchical data, gRPC does not have this limitation. With gRPC, we are also free to build in search and parameterization capabilities ourselves.
❌ Although gRPC is considered a fast protocol due to its low overhead, it has no built-in support for reducing latency by aggregating requests, as Falcor does with its one-model approach. If my network connection is busy, my gRPC application is potentially slow.
Who is gRPC worthwhile for?
gRPC offers services an easy way to exchange data across different environments and locations. Load balancing, logging, tracing and health checks can be configured via plug-ins. The focus of gRPC is more on server-to-server communication; gRPC interfaces provided for a frontend seem rather unusual.
gRPC is supported by more programming languages than GraphQL, e.g. Java, PHP, Python, Go, C# and C++. However, the prevalence of gRPC, both among organizations and among contributors, is lower than that of GraphQL. The gRPC gateway at least offers an approach to building an API economy without additional tools; whether this actually suffices depends on the particular context.
Thanks to gRPC's simplicity and the help protoc provides in generating interface artifacts, gRPC is quick to learn. It only takes a few minutes to get a first running example covering most of the basic gRPC concepts.
Pure JSON — Request, Response, Confirm
Pure JSON is based on the idea of communicating between client and server primarily via the WebSocket protocol, using JSON as the data format.
Simplified, WebSockets can be described as peer-to-peer connections between client and server. A WebSocket connection starts as an HTTP(S) request and is "upgraded" by client and server. Once established, the connection remains in place until it is closed by the client or server via close(). This avoids the overhead, known from HTTP, of re-establishing connections after each completed request. Oh yes: with HTTP, a server cannot simply push data to a client, only respond to the client's requests. WebSockets do not have this limitation either; both partners may transfer data using send().

We do not have to say much about JSON as a data exchange format; about interfaces described purely with JSON, we do. The idea behind Pure JSON is to use verbs that correspond to the CRUD operations: create, retrieve (instead of read), update and delete, plus flush to notify the server that the client has deleted data from its memory. Conversely, the server does not have to communicate flush to the client. In response to one of these operations, the recipient returns, for example in the case of create, CREATED or CREATED_FAIL.
The response status is described using log nodes in JSON with log level, return codes, and messages.
...
// Log
{log_table: [
    {code_key: "400",
     code_str: "Bad request",
     level_int: 3, /* corresponds to error */
     level_str: "error",
     log_id: "42",
     user_msg: "ID is missing"
    }
  ],
  ...
}
...
Instead of the request/response known from the REST world, request/response/confirm is recommended. With confirm, the client acknowledges to the server the successful processing of received data. This way the server can track the client's status and receive confirmation for an operation, for example after time-consuming processing of data by the client.
Where is the API here?
The API is defined by the following message structure, which should be used for all messages in exactly this form:
// Request
{action_str: "retrieve", /* operation */
 data_type: "APIexample", /* application-specific */
 log_table: [/* log information, rather not in a request */],
 request_map: {/* request parameters/payload */},
 trans_map: {/* meta information such as API version */}
}

// (Indirect) Response
{action_str: "RETRIEVED", /* in response to retrieve */
 data_type: "APIexample", /* application-specific */
 log_table: [/* log information */],
 response_map: {/* payload */},
 trans_map: {/* meta information */}
}

// Confirm
{action_str: "done", /* receipt to the server: I'm done! */
 data_type: "APIexample", /* application-specific */
 log_table: [/* log information */],
 confirm_map: {/* payload */},
 trans_map: {/* meta information */}
}
When using WebSockets, data is transferred on both sides with send(). Client and server must be designed so that the verbs, data types and parameters transmitted in this format can be processed.
An indirect response takes place when data has changed on the server and is transferred to the client without the client having explicitly requested the update.
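A structural check for such messages can be sketched in a few lines of plain JavaScript. The field names follow the examples above; the validateMessage helper and its exact rules are our own illustration, not part of any Pure JSON specification.

```javascript
// Minimal structural check for Pure JSON messages as shown above:
// every message carries action_str, data_type, log_table and trans_map,
// plus exactly one payload map depending on its role.
function validateMessage(msg) {
  const baseFields = ["action_str", "data_type", "log_table", "trans_map"];
  const payloadFields = ["request_map", "response_map", "confirm_map"];
  const hasBase = baseFields.every((f) => f in msg);
  const payloads = payloadFields.filter((f) => f in msg);
  return hasBase && payloads.length === 1;
}

const requestMsg = {
  action_str: "retrieve",
  data_type: "APIexample",
  log_table: [],
  request_map: { id: "1" },
  trans_map: { api_version: "1.0" },
};

console.log(validateMessage(requestMsg)); // true
console.log(validateMessage({ action_str: "retrieve" })); // false
```

Since JSON itself carries no schema, both client and server would run a check like this on every incoming message.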
The good, the bad...
✅❌ JSON per se has no document type definition like XML. If necessary, JSON Schema can remedy this.
✅ The concept behind Pure JSON is relatively simple and easy to understand. It's just JSON after all.
❌ We're not aware of any real Pure JSON application. Pure JSON currently seems to be more of a concept than a solution.
✅ Several pure JSON messages can be combined and transmitted as a JSON array.
✅ Pure JSON is not bound to a protocol: whether websockets, HTTP, or SMTP is used plays only a minor role.
❌ The validation of a Pure JSON interface and the data transferred with it must be carried out by client and server themselves.
✅ JSON is language and platform-independent by definition.
✅ It primarily uses messages instead of resources à la REST.
Who is Pure JSON worth it for?
Pure JSON APIs are not bound to WebSockets. Instead of WebSockets, you can use HTTP, for example, and only the POST operation, e.g. to stay consistent with the WebSocket send(). That is definitely not RESTful, but that is exactly what this post is about.
Where real-time transmission of data plays a role, WebSockets are a means of choice, because the latency of establishing a connection per request is saved, which enables real streaming of data, for example.
The Node.js-based socket.io is probably the most common WebSocket implementation. Alternatives to socket.io can be found in many programming languages such as Java, Python, PHP, Go or Scala. None of the implementations is tied to using JSON.
Pure JSON APIs are potentially applicable in any language that understands JSON. Getting to grips with the idea is definitely worthwhile.
Finally...
Anyone pained by the limitations of REST, e.g.
- strict use of resources
- verbs that do not always fit, PUT vs. POST
- response codes that are not always suitable
- complex debugging (verb, response code, payload, header, embedded error messages, etc.)
can look into the alternatives described in this post and check whether they fit the existing conditions of their own context better than REST.