Found 1 old transactions. Acting to abort them

Hello dear friends

I have had a strange problem with Dgraph for some time.

I run 1 Zero and 1 Alpha, and Dgraph starts working fine.

A few hours later, the Alpha begins a process that never completes.

I see these messages in its logs:

```
draft.go:1195] Found 1 old transactions. Acting to abort them.
draft.go:1156] TryAbort 1 txns with start ts. Error: <nil>
draft.go:1172] TryAbort: No aborts found. Quitting.
draft.go:1198] Done abortOldTransactions for 1 txns. Error: <nil>
log.go:34] Got compaction priority: {level:0 score:1 dropPrefix:[]}
log.go:34] Running for level: 0
log.go:34] LOG Compact 0->1, del 6 tables, add 1 tables, took 830.788054ms
log.go:34] Compaction for level: 0 DONE
draft.go:1195] Found 1 old transactions. Acting to abort them.
draft.go:1156] TryAbort 1 txns with start ts. Error: <nil>
draft.go:1172] TryAbort: No aborts found. Quitting.
draft.go:1198] Done abortOldTransactions for 1 txns. Error: <nil>
log.go:34] Got compaction priority: {level:0 score:1 dropPrefix:[]}
log.go:34] Running for level: 0
```

I waited for hours, but it still didn’t work. I stopped and restarted the Alpha, but it still continues the same process.

If I stop the services, delete the Alpha and Zero folders with their data, and re-import my export, the problem is fixed, but a few hours later the same problem appears again.

I have tried different versions, but the same error repeats.
Versions:
1.2.3
1.2.6
20.03.3
20.03.6

There’s no issue in your logs; you can ignore those processes. The abort task is a background task that always runs. Besides this, are you seeing anything else?


Hi @MichelDiz
thanks for your answer
But while this process runs, Dgraph does not respond to any request, neither from Ratel nor from the Go client.

After many hours, the problem still does not go away

In some cases, RAM usage grows until it reaches the maximum and Dgraph crashes.

Also, Dgraph starts generating files in the Alpha folder; in some cases a 30 GB file is generated in less than 30 minutes.

What are you doing? Loading massive data?

No
We have an educational site with 2,000 users.

The site handles educational processes, including tests, assignments, meetings, etc.

During peak hours of the day, the number of database transactions reaches 100 per second.

What is your deployment: Docker, bare metal, or k8s?
Also, what are the specs?

The client is written in Go.

I use these packages in my Dgraph client:

```go
import (
	"github.com/dgraph-io/dgo/v2"
	"github.com/dgraph-io/dgo/v2/protos/api"
	grpc "google.golang.org/grpc"
)
```

zero service config:

```json
{
  "my": "localhost:5080",
  "wal": "/db/z0/zw"
}
```

and alpha service conf:

```json
{
  "my": "localhost:7080",
  "zero": "localhost:5080",
  "lru_mb": 2048,
  "postings": "/db/a0/p",
  "wal": "/db/a0/w"
}
```
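Assuming those JSON snippets are saved as `zero.json` and `alpha.json` (hypothetical file names), the binaries pick them up via the `--config` flag:

```shell
# Start Zero and Alpha with the config files above.
dgraph zero --config ./zero.json
dgraph alpha --config ./alpha.json
```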

But are you using Docker, bare metal, or k8s? And what are the specifications of those machines?

If you don’t want to deal with day-to-day Dgraph administration, you can use the hosted service, Slash GraphQL (https://dgraph.io/slash-graphql); it can expose the DQL gRPC service for you.

I work on an Ubuntu 18.04 server.

The specifications of the machine:
CPU: 6 cores
RAM: 24 GB

What do you mean by “But are you using docker? bare metal? k8s?”

I used the automatic download as described here:
https://dgraph.io/docs/deploy/download/#automatic-download

So you have a bare metal machine running Dgraph binary?

  1. Move to the latest version of Dgraph.
  2. Remove the lru_mb option value. (optional)
  3. If you have multiple hard disks, you could set up multiple Alphas, each using its own disk/SSD.

Multiple Alphas are good to spread the load among the cluster.

  1. You can use best-effort queries (https://dgraph.io/docs/clients/raw-http/#running-best-effort-queries); see the RAFT section of the design-concepts docs for background on why they help.

I see, it is really a bare-metal machine.