SSL error while trying to export from dgraph

Hi, I have a Dgraph community edition running with Docker (installed using docker-compose). I'm trying to connect to the alpha node and export the data to my local machine, but I get a connection reset error even though I've confirmed the alpha is running on that endpoint.

import json
import logging

import pydgraph
# from trialsai.statements.utils import new_dql_client

logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)

# Creation of DQL client
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport

transport = RequestsHTTPTransport(
    url="http://172.31.0.209:9081/admin/", verify=True, retries=3,
)
client = Client(transport=transport, fetch_schema_from_transport=False)

export_mutation = gql("""
    mutation {
        export(input: {
            format: "rdf"
            destination: "./"
        }) {
            response {
                message
                code
            }
        }
    }
""")

result = client.execute(export_mutation)
# res = dql_client.txn.query(export_mutation)
print(result)

Can someone explain what I'm doing wrong? Also, Dgraph has open access, as we have whitelisted incoming traffic.
Here’s the error:

requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.31.0.209', port=9081): Max retries exceeded with url: /admin/ (Caused by ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')))

Hey @Orneyfish,

The error doesn't look SSL-related. Usually the first step with these errors is to validate that there's a network route to the Docker container. Have you tried:

curl http://172.31.0.209:8080/graphql
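
If that responds, you can also check the admin endpoint your script targets directly. As a rough sketch (this assumes your compose file maps the admin HTTP port to 9081 on that host; adjust to your own port mapping):

# Hedged connectivity check against the /admin GraphQL endpoint; host and port
# are taken from your script and may need adjusting to your setup.
curl -s http://172.31.0.209:9081/admin -H 'Content-Type: application/json' \
  -d '{"query": "{ health { status version } }"}'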

Thanks @matthewmcneely, I think I've got it.
I'd also like to add a follow-up question here. I have set up Dgraph using Docker on an Amazon EKS cluster behind an ALB. Can you tell me how I can export the archived data from it? I tried hitting the /admin endpoint, which archives the data inside the container, but in my use case I want to bring the archived data down to my local machine.
I have used this code:



import json
import logging
import time

import pydgraph

from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport

query = """mutation {
    export(input: {
        format: "rdf"
    }) {
        response {
            message
            code
        }
    }
}"""

check_status_query = """query {
    task(input: {id: "$$id$$"}) {
        status
        lastUpdated
        kind
    }
}"""

logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)

# Creation of DQL client
transport = RequestsHTTPTransport(
    url="http://172.31.0.209:4002/admin", verify=True, retries=3,
)
client = Client(transport=transport, fetch_schema_from_transport=False)

export_mutation = gql(query)


def poll_export_status(client, status_query):
    """Polls the Dgraph instance for the status of the export operation.

    Args:
        client: A GQL client object.
        status_query: A GQL query object for the task-status check.

    Returns:
        A string representing the status of the export operation.
    """
    result = client.execute(status_query)
    print("result from the status is: ", result)
    return result["task"]["status"]


# Start the export operation.
result = client.execute(export_mutation)
print("result from the execution is: ", result)

taskId = None
if "message" in result["export"]["response"]:
    print("Export operation started.")
    taskId = result["export"]["response"]["message"].split("ID ")[-1]
    print("taskId is: %s" % taskId)

# Poll the Dgraph instance for the status of the export operation.
flag = taskId is not None
while flag:
    check_status = gql(check_status_query.replace("$$id$$", taskId))
    export_status = poll_export_status(client, check_status)

    if export_status.lower() == "success":
        print("Export operation complete.")
        flag = False
    elif export_status.lower() == "failed":
        print("Export operation failed. Status: {}".format(export_status))
        flag = False
    else:
        print("Export operation in progress. Status: {}".format(export_status))
        time.sleep(10)

I have tried this code and it works, but it archives the data inside the alpha node. I will be hosting this through EKS for a small POC, and I want to get the archived data onto a local machine from the hosted Dgraph instance.
Can you help me out here?

You’ll need to mount a volume in your EKS setup. As for moving that exported data from that EKS volume to your local machine, a Google search yielded: azure aks - How to copy files from kubernetes Pods to local system - Stack Overflow
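
For the copy itself, a rough sketch with kubectl cp (the namespace, pod name, and in-container export path below are placeholders; the alpha writes exports to an export directory under its working directory by default, so point this at wherever your volume is mounted):

# List pods to find the alpha, then copy its export directory locally.
# <namespace> and <dgraph-alpha-pod> are placeholders for your deployment.
kubectl get pods -n <namespace>
kubectl cp <namespace>/<dgraph-alpha-pod>:/dgraph/export ./dgraph-export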

Thanks a lot, Matthew, that was a great help!