Enriched Configurable Routable Logging Support

Experience Report for Feature Request

In practice, logging with Dgraph is currently limited to sending output to stdout or stderr.

What you wanted to do

This is an enhancement request for some configurable logging options, such as:

  1. Configurable routing to direct standard logs, error logs, query logs, debug logs, and access logs to separate, configurable paths.
  2. Ability to route logs to destinations other than files or stdout/stderr, such as syslog.
  3. The logging format should be configurable by the user, such as using a JSONFormatter or TextFormatter.
  4. A configurable timestamp format.
  5. Errors and warnings should go to stderr by default and be flagged as an error or warning. In some cases, errors are currently reported as info.
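To illustrate items 3 and 5, Go's standard log/slog package already offers this kind of formatter choice. Dgraph does not expose this today; the sketch below only shows the sort of configurability being requested:

```go
package main

import (
	"bytes"
	"fmt"
	"log/slog"
)

// jsonLogLine renders a single ERROR record as JSON using the standard
// library's slog package. This is not a Dgraph API — just a sketch of
// the JSON-vs-text formatter choice this request asks for.
func jsonLogLine(msg string) string {
	var buf bytes.Buffer
	logger := slog.New(slog.NewJSONHandler(&buf, nil))
	logger.Error(msg)
	return buf.String()
}

func main() {
	fmt.Print(jsonLogLine("disk almost full"))
	// Swapping NewJSONHandler for NewTextHandler yields key=value text
	// output instead, without touching any call sites.
}
```

The same handler abstraction also covers item 5: a handler bound to stderr with `Level: slog.LevelWarn` would keep warnings and errors separate from info output.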

What you actually did

We rely on an external system to apply regex patterns to filter and route logs, and to handle compaction, retention, and rotation.

Why that wasn’t great, with examples

Currently, there’s a high barrier to using Dgraph in production systems, especially with enterprise features like ACL, where logging can quickly fill the disk and starve the Alpha node of disk resources.

This would be acceptable even as an enterprise feature (enterprise logging vs. open-source logging), with the exception of item 5: errors should be flagged as E, not I.

Any external references to support your case

Many enterprise systems have the notion of a separate error log alongside other logs, and a few have authorization logs for tracking login events. Systems accessed through web endpoints also have access logs, which are useful for security and for discovering which users are accessing the system.

MongoDB

Neo4J

  • logs segregated into neo4j.log, debug.log, http.log, gc.log, query.log, security.log, service-error.log
  • logs have expected categories of INFO, WARN, ERROR, and also DEBUG.
  • security and query logging are enterprise features.
  • ref. Logging - Operations Manual

Cassandra

  • Uses SLF4J (Simple Logging Facade for Java) with logback backend
  • log file retention, log file rotation, extended compaction logging
  • ref. Configuring logging

CockroachDB

I am new to this area, as we are just starting to think about logging in our UI and API (Dgraph GraphQL). We will most likely be using Datadog for the UI logging; if Dgraph could integrate with Datadog, that would be awesome. Again, I'm just getting my feet wet here for now, so not much to add besides mentioning a service we will probably be using already.

Dgraph currently has some support for Datadog through the OpenCensus library; there’s a configuration option, dgraph alpha --datadog.collector <string>. I have not yet tinkered with this feature.

How about this in dgraph --help?

```
--log_dir string    If non-empty, write log files in this directory
```