

      How To Build a GraphQL API With Golang to Upload Files to DigitalOcean Spaces

      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.


      For many applications, one desirable feature is the user’s ability to upload a profile image. However, building this feature can be a challenge for developers new to GraphQL, which has no built-in support for file uploads.

In this tutorial, you will learn to upload images to a third-party storage service directly from your backend application. You will build a GraphQL API that uses the S3-compatible AWS Go SDK from a Go backend application to upload images to DigitalOcean Spaces, which is a highly scalable object storage service. The Go backend application will expose a GraphQL API and store user data in a PostgreSQL database provided by DigitalOcean’s Managed Databases service.

      By the end of this tutorial, you will have built a GraphQL API using Golang that can receive a media file from a multipart HTTP request and upload the file to a bucket within DigitalOcean Spaces.


      To follow this tutorial, you will need:

      • A DigitalOcean account. If you do not have one, sign up for a new account. You will use DigitalOcean’s Spaces and Managed Databases in this tutorial.

      • A DigitalOcean Space with Access Key and Access Secret, which you can create by following the tutorial, How To Create A DigitalOcean Space and API Key. You can also see product documentation for How to Manage Administrative Access to Spaces.

      • Go installed on your local machine, which you can do by following our series, How to Install and Set Up a Local Programming Environment for Go. This tutorial used Go version 1.17.1.

      • Basic knowledge of Golang, which you can gain from our How To Code in Go series. The tutorial, How To Write Your First Program In Go, provides a good introduction to the Golang programming language.

      • An understanding of GraphQL, which you can find in our tutorial, An Introduction To GraphQL.

      Step 1 — Bootstrapping a Golang GraphQL API

In this step, you will use the Gqlgen library to bootstrap the GraphQL API. Gqlgen is a Go library for building GraphQL APIs. Two important features that Gqlgen provides are a schema-first approach and code generation. With the schema-first approach, you first define the data model for the API using the GraphQL Schema Definition Language (SDL), then generate the boilerplate code for the API from the defined schema. With the code generation feature, you do not need to manually create the query and mutation resolvers for the API; they are generated automatically.

      To get started, execute the command below to install gqlgen:

      • go install

      Next, create a project directory named digitalocean to store the files for this project:

      Change into the digitalocean project directory:

      From your project directory, run the following command to create a go.mod file that manages the modules within the digitalocean project:

      Next, using nano or your favorite text editor, create a file named tools.go within the project directory:

Add the following lines to the tools.go file to register gqlgen as a build tool for the project:

// +build tools

package tools

import _ "github.com/99designs/gqlgen"

      Next, execute the tidy command to install the gqlgen dependency introduced within the tools.go file:

      Finally, using the installed Gqlgen library, generate the boilerplate files needed for the GraphQL API:

      Running the gqlgen command above generates a server.go file for running the GraphQL server and a graph directory containing a schema.graphqls file that contains the Schema Definitions for the GraphQL API.

      In this step, you used the Gqlgen library to bootstrap the GraphQL API. Next, you’ll define the schema of the GraphQL application.

      Step 2 — Defining the GraphQL Application Schema

In this step, you will define the schema of the GraphQL application by modifying the schema.graphqls file that was automatically generated when you ran the gqlgen init command. In this file, you will define the User, Query, and Mutation types.

      Navigate to the graph directory and open the schema.graphqls file, which defines the schema of the GraphQL application. Replace the boilerplate schema with the following code block, which defines the User type with a Query to retrieve all user data and a Mutation to insert data:


scalar Upload

type User {
  id: ID!
  fullName: String!
  email: String!
  img_uri: String!
  DateCreated: String!
}

type Query {
  users: [User]!
}

input NewUser {
  fullName: String!
  email: String!
  img_uri: String
  DateCreated: String
}

input ProfileImage {
  userId: String
  file: Upload
}

type Mutation {
  createUser(input: NewUser!): User!
  uploadProfileImage(input: ProfileImage!): Boolean!
}

      The code block defines two Mutation types and a single Query type for retrieving all users. A mutation is used to insert or mutate existing data in a GraphQL application, while a query is used to fetch data, similar to the GET HTTP verb in a REST API.

The schema in the code block above uses the GraphQL Schema Definition Language to define a Mutation type containing the createUser field, which accepts the NewUser input as a parameter and returns a single user. It also contains the uploadProfileImage field, which accepts the ProfileImage input and returns a boolean value indicating the success status of the upload operation.

Note: Gqlgen automatically defines the Upload scalar type, which describes the properties of an uploaded file. To use it, you only need to declare it at the top of the schema file, as was done in the code block above.

      At this point, you have defined the structure of the data model for the application. The next step is to generate the schema’s query and the mutation resolver functions using Gqlgen’s code generation feature.

      Step 3 — Generating the Application Resolvers

      In this step, you will use Gqlgen’s code generation feature to automatically generate the GraphQL resolvers based on the schema that you created in the previous step. A resolver is a function that resolves or returns a value for a GraphQL field. This value could be an object or a scalar type such as a string, number, or even a boolean.
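To make the idea concrete, the sketch below shows a hand-written, hypothetical resolver; it is only an illustration of a method returning a value for a field, not code that Gqlgen generates:

```go
package main

import "fmt"

// queryResolver illustrates what a resolver is conceptually: each method
// resolves one field of the schema by returning a value for it.
type queryResolver struct {
	users []string
}

// Users resolves a hypothetical "users" field.
func (r *queryResolver) Users() ([]string, error) {
	return r.users, nil
}

func main() {
	r := &queryResolver{users: []string{"John Doe"}}
	users, _ := r.Users()
	fmt.Println(users)
}
```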

      The Gqlgen package is based on a schema-first approach. A time-saving feature of Gqlgen is its ability to generate your application’s resolvers based on your defined schema in the schema.graphqls file. With this feature, you do not need to manually write the resolver boilerplate code, which means you can focus on implementing the defined resolvers.

      To use the code generation feature, execute the command below in the project directory to generate the GraphQL API model files and resolvers:

      A few things will happen after executing the gqlgen command. Two validation errors relating to the schema.resolvers.go file will be printed out, some new files will be generated, and your project will have a new folder structure.

      Execute the tree command to view the new files added to your project.

      tree *

      The current directory structure will look similar to this:


go.mod
go.sum
gqlgen.yml
graph
├── db.go
├── generated
│   └── generated.go
├── model
│   └── models_gen.go
├── resolver.go
├── schema.graphqls
└── schema.resolvers.go
server.go
tmp
├── build-errors.log
└── main
tools.go

2 directories, 8 files

      Among the project files, one important file is schema.resolvers.go. It contains methods that implement the Mutation and Query types previously defined in the schema.graphqls file.

      To fix the validation errors, delete the CreateTodo and Todos methods at the bottom of the schema.resolvers.go file. Gqlgen moved the methods to the bottom of the file because the type definitions were changed in the schema.graphqls file.


package graph

// This file will be automatically regenerated based on the schema, any resolver implementations
// will be copied through when generating and any unknown code will be moved to the end.

import (
	"context"
	"fmt"

	"digitalocean/graph/generated"
	"digitalocean/graph/model"
)

func (r *mutationResolver) CreateUser(ctx context.Context, input model.NewUser) (*model.User, error) {
	panic(fmt.Errorf("not implemented"))
}

func (r *mutationResolver) UploadProfileImage(ctx context.Context, input model.ProfileImage) (bool, error) {
	panic(fmt.Errorf("not implemented"))
}

func (r *queryResolver) Users(ctx context.Context) ([]*model.User, error) {
	panic(fmt.Errorf("not implemented"))
}

// Mutation returns generated.MutationResolver implementation.
func (r *Resolver) Mutation() generated.MutationResolver { return &mutationResolver{r} }

// Query returns generated.QueryResolver implementation.
func (r *Resolver) Query() generated.QueryResolver { return &queryResolver{r} }

type mutationResolver struct{ *Resolver }
type queryResolver struct{ *Resolver }

// !!! WARNING !!!
// The code below was going to be deleted when updating resolvers. It has been copied here so you have
// one last chance to move it out of harms way if you want. There are two reasons this happens:
//  - When renaming or deleting a resolver the old code will be put in here. You can safely delete
//    it when you're done.
//  - You have helper methods in this file. Move them out to keep these resolver files clean.
func (r *mutationResolver) CreateTodo(ctx context.Context, input model.NewTodo) (*model.Todo, error) {
	panic(fmt.Errorf("not implemented"))
}

func (r *queryResolver) Todos(ctx context.Context) ([]*model.Todo, error) {
	panic(fmt.Errorf("not implemented"))
}

As defined in the schema.graphqls file, Gqlgen’s code generator created two mutation resolver methods and one query resolver method. These resolvers serve the following purposes:

      • CreateUser: This mutation resolver inserts a new user record into the connected Postgres database.

• UploadProfileImage: This mutation resolver receives a media file from a multipart HTTP request and uploads the file to a bucket within DigitalOcean Spaces. After the file upload, the URL of the uploaded file is inserted into the img_uri field of the previously created user.

      • Users: This query resolver queries the database for all existing users and returns them as the query result.

Going through the methods generated from the Mutation and Query types, you will observe that they cause a panic with a not implemented error when executed. This indicates that they are still auto-generated boilerplate code. Later in this tutorial, you will return to the schema.resolvers.go file to implement these generated methods.

      At this point, you generated the resolvers for this application based on the content of the schema.graphqls file. You will now use the Managed Databases service to create a database that will store the data passed to the mutation resolvers to create a user.

      Step 4 — Provisioning and Using a Managed Database Instance on DigitalOcean

      In this step, you will use the DigitalOcean console to access the Managed Databases service and create a PostgreSQL database to store data from this application. After the database has been created, you will securely store the details in a .env file.

Although the application will not store images directly in a database, it still needs a database to insert each user’s record. The stored record will then contain links to the uploaded files.

A user’s record will consist of a fullName, email, DateCreated, and img_uri field, each of the String data type. The img_uri field contains the URL pointing to an image file uploaded by a user through this GraphQL API and stored within a bucket on DigitalOcean Spaces.
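For reference, a record with those fields maps to a Go struct along the following lines. This is a sketch of what gqlgen generates into models_gen.go from the schema; the generated file may differ in detail:

```go
package main

import "fmt"

// User mirrors the fields described above; the json tags follow the
// field names declared in schema.graphqls.
type User struct {
	ID          string `json:"id"`
	FullName    string `json:"fullName"`
	Email       string `json:"email"`
	ImgURI      string `json:"img_uri"`
	DateCreated string `json:"DateCreated"`
}

func main() {
	u := User{ID: "1", FullName: "John Doe", ImgURI: "https://example.com/img.png"}
	fmt.Println(u.FullName, u.ImgURI)
}
```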

      Using your DigitalOcean dashboard, navigate to the Databases section of the console to create a new database cluster, and select PostgreSQL from the list of databases offered. Leave all other settings at their default values and create this cluster using the button at the bottom.

      Digitalocean database cluster

      The database cluster creation process will take a few minutes before it is completed.

      After creating the cluster, follow the Getting Started steps on the database cluster page to set up the cluster for use.

      At the second step of the Getting Started guide, click the Continue, I’ll do this later text to proceed. By default, the database cluster is open to all connections.

      Note: In a production-ready scenario, the Add Trusted Sources input field at the second step should only contain trusted IP addresses, such as the IP Address of the DigitalOcean Droplet running the application. During development, you can alternatively add the IP address of your development machine to the Add Trusted Sources input field.

      Click the Allow these inbound sources button to save and proceed to the next step.

      At the next step, the connection details of the cluster are displayed. You can also find the cluster credentials by clicking the Actions dropdown, then selecting the Connection details option.

      Digitalocean database cluster credentials

      In this screenshot, the gray box at right shows the connection credentials of the created demo cluster.

      You will securely store these cluster credentials as environment variables. In the digitalocean project directory, create a .env file and add your cluster credentials in the following format, making sure to replace the highlighted placeholder content with your own credentials:



      With the connection details securely stored in the .env file, the next step will be to retrieve these credentials and connect the database cluster to your project.

Before proceeding, you will need a few libraries for working with the Postgres database. go-pg is a Golang library for translating ORM (object-relational mapping) queries into SQL queries for a Postgres database. godotenv is a Golang library for loading environment credentials from a .env file into your application. Lastly, go.uuid generates a UUID (universally unique identifier) for each user’s record that will be inserted into the database.

      Execute this command to install these:

      • go get

      Next, navigate to the graph directory and create a db.go file. You will gradually put together the code within the file to connect with the Postgres database created in the Managed Databases cluster.

      First, add the content of the code block into the db.go file. This function (createSchema) creates a user table in the Postgres database immediately after a connection to the database has been established.


package graph

import (
	"digitalocean/graph/model"

	"github.com/go-pg/pg/v10"
	"github.com/go-pg/pg/v10/orm"
)

func createSchema(db *pg.DB) error {
	for _, models := range []interface{}{(*model.User)(nil)} {
		if err := db.Model(models).CreateTable(&orm.CreateTableOptions{
			IfNotExists: true,
		}); err != nil {
			return err
		}
	}
	return nil
}

Using the IfNotExists option passed to the CreateTable method from go-pg, the createSchema function only inserts a new table into the database if the table does not exist. You can understand this process as a simplified form of seeding a newly created database: rather than creating the tables manually through the psql client or a GUI, the createSchema function takes care of the table creation.

      Next, add the content of the code block below into the db.go file to establish a connection to the Postgres database and execute the createSchema function above when a connection has been established successfully:


import (
	// ...
	"fmt"
	"os"
)

func Connect() *pg.DB {
	DB_PASSWORD := os.Getenv("DB_PASSWORD")
	DB_PORT := os.Getenv("DB_PORT")
	DB_NAME := os.Getenv("DB_NAME")
	DB_ADDR := os.Getenv("DB_ADDR")
	DB_USER := os.Getenv("DB_USER")

	connStr := fmt.Sprintf(
		"postgres://%v:%v@%v:%v/%v?sslmode=require",
		DB_USER, DB_PASSWORD, DB_ADDR, DB_PORT, DB_NAME,
	)

	opt, err := pg.ParseURL(connStr)
	if err != nil {
		panic(err)
	}

	db := pg.Connect(opt)
	if schemaErr := createSchema(db); schemaErr != nil {
		panic(schemaErr)
	}
	if _, DBStatus := db.Exec("SELECT 1"); DBStatus != nil {
		panic("PostgreSQL is down")
	}

	return db
}

      When executed, the exported Connect function in the code block above establishes a connection to a Postgres database using go-pg. This is done through the following operations:

      • First, the database credentials you stored in the root .env file are retrieved. Then, a variable is created to store a string formatted with the retrieved credentials. This variable will be used as a connection URI when connecting with the database.

• Next, the created connection string is parsed with pg.ParseURL to check that the formatted credentials are valid. If they are, the parsed connection options are passed into the pg.Connect method to establish a connection.
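To illustrate the first bullet, the connection string is assembled with fmt.Sprintf before being parsed. The helper below is a sketch; the exact format string is an assumption based on the standard postgres:// URI layout that pg.ParseURL accepts, with sslmode=require since Managed Databases clusters expect TLS:

```go
package main

import "fmt"

// buildConnString assembles a postgres:// connection URI from the
// individual credentials loaded from the .env file. The exact layout
// is an assumption based on the URI format pg.ParseURL accepts.
func buildConnString(user, password, addr, port, name string) string {
	return fmt.Sprintf("postgres://%v:%v@%v:%v/%v?sslmode=require",
		user, password, addr, port, name)
}

func main() {
	fmt.Println(buildConnString("doadmin", "secret", "db-host", "25060", "defaultdb"))
}
```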

      To use the exported Connect function, you will need to add the function to the server.go file, so it will be executed when the application is started. Then the connection can be stored in the DB field within the Resolver struct.

      To use the previously created Connect function from the graph package immediately after the application is started, and to load the credentials from the .env file into the application, open the server.go file in your preferred code editor and add the lines highlighted below:

      Note: Make sure to replace the existing srv variable in the server.go file with the srv variable highlighted below.


package main

import (
	// ...
	"log"

	"digitalocean/graph"
	"digitalocean/graph/generated"
	"github.com/99designs/gqlgen/graphql/handler"
	"github.com/joho/godotenv"
)

const defaultPort = "8080"

func main() {
	err := godotenv.Load()
	if err != nil {
		log.Fatal("Error loading .env file")
	}
	// ...
	Database := graph.Connect()
	srv := handler.NewDefaultServer(
		generated.NewExecutableSchema(generated.Config{
			Resolvers: &graph.Resolver{
				DB: Database,
			},
		}),
	)
	// ...
}

In this code snippet, you loaded the credentials stored in the .env file through the Load() function. You called the Connect function from the graph package and also created the Resolver object with the database connection stored in the DB field. (The stored database connection will be accessed by the resolvers later in this tutorial.)

      Currently, the boilerplate Resolver struct in the resolver.go file does not contain the DB field where you stored the database connection in the code above. You will need to create the DB field.

      In the graph directory, open the resolver.go file and modify the Resolver struct to have a DB field with a go-pg pointer as its type, as shown below:


package graph

import "github.com/go-pg/pg/v10"

// This file will not be regenerated automatically.
// It serves as dependency injection for your app, add any dependencies you require here.

type Resolver struct {
	DB *pg.DB
}

      Now a database connection will be established each time the entry server.go file is run and the go-pg package can be used as an ORM to perform operations on the database from the resolver functions.

      In this step, you created a PostgreSQL database using the Managed Database service on DigitalOcean. You also created a db.go file with a Connect function to establish a connection to the PostgreSQL database when the application is started. Next, you will implement the generated resolvers to store data in the PostgreSQL database.

      Step 5 — Implementing the Generated Resolvers

      In this step, you will implement the methods in the schema.resolvers.go file, which serves as the mutation and query resolvers. The implemented mutation resolvers will create a user and upload the user’s profile image, while the query resolver will retrieve all stored user details.

      Implementing the Mutation Resolver Methods

From the schema.graphqls file, two mutation resolvers were generated: one inserts the user’s record, while the other handles profile image uploads. However, these mutations have not yet been implemented, as they contain only boilerplate code.

      Open the schema.resolvers.go file. Modify the imports and the CreateUser mutation with the highlighted lines to insert a new row containing the user details input into the database:


package graph

import (
	// ...
	"time"

	uuid "github.com/satori/go.uuid"
)

func (r *mutationResolver) CreateUser(ctx context.Context, input model.NewUser) (*model.User, error) {
	user := model.User{
		ID:          fmt.Sprintf("%v", uuid.NewV4()),
		FullName:    input.FullName,
		Email:       input.Email,
		ImgURI:      "",
		DateCreated: time.Now().Format("01-02-2006"),
	}

	_, err := r.DB.Model(&user).Insert()
	if err != nil {
		return nil, fmt.Errorf("error inserting user: %v", err)
	}

	return &user, nil
}

      In the CreateUser mutation, there are two things to note about the user rows inserted. First, each row that is inserted is given a UUID. Second, the ImgURI field in each row has a placeholder image URL as the default value. This will be the default value for all records and will be updated when a user uploads a new image.

      Next, you will test the application that has been built at this point. From the project directory, run the server.go file with the following command:

Now, navigate to http://localhost:8080 in your web browser to access the GraphQL playground built into your GraphQL API. Paste the GraphQL mutation in the code block below into the playground editor to insert a new user record.


mutation createUser {
  createUser(
    input: {
      email: "[email protected]"
      fullName: "John Doe"
    }
  ) {
    id
  }
}

      The output in the right pane will look similar to this:

A create user mutation on the GraphQL Playground

      You executed the CreateUser mutation to create a test user with the name of John Doe, and the id of the newly inserted user record was returned as a result of the mutation.

      Note: Copy the id value returned from the executed GraphQL query. You will use the id when uploading a profile image for the test user created above.

At this point, you still have the second UploadProfileImage mutation resolver function to implement. Before you do, you need to implement the query resolver, because each upload is linked to a specific user; this is why you retrieved the ID of a specific user before uploading an image.

      Implementing the Query Resolver Method

As defined in the schema.graphqls file, one query resolver was generated to retrieve all created users. Similar to the previous mutation resolver methods, you also need to implement the query resolver method.

Open schema.resolvers.go and modify the generated Users query resolver with the highlighted lines. The new code within the Users method below will query the Postgres database for all user rows and return the result.


package graph

func (r *queryResolver) Users(ctx context.Context) ([]*model.User, error) {
	var users []*model.User

	err := r.DB.Model(&users).Select()
	if err != nil {
		return nil, err
	}

	return users, nil
}

Within the Users resolver function above, fetching all records within the user table is made possible by using go-pg’s Select method on the User model without passing a WHERE or LIMIT clause into the query.

      Note: For a bigger application where many records will be returned from the query, it is important to consider paginating the data returned for improved performance.
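One common approach is LIMIT/OFFSET pagination, where the offset is derived from a page number. The pageBounds helper below is hypothetical; its results would be fed to go-pg’s Limit and Offset query methods:

```go
package main

import "fmt"

// pageBounds converts a 1-based page number and page size into the
// LIMIT and OFFSET values of a paginated SQL query.
func pageBounds(page, perPage int) (limit, offset int) {
	if page < 1 {
		page = 1
	}
	return perPage, (page - 1) * perPage
}

func main() {
	limit, offset := pageBounds(3, 20)
	fmt.Println(limit, offset) // 20 40
}
```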

      To test this query resolver from your browser, navigate to http://localhost:8080 to access the GraphQL playground. Paste the GraphQL Query below into the playground editor to fetch all created user records.


query fetchUsers {
  users {
    id
    fullName
    email
    img_uri
  }
}

      The output in the right pane will look similar to this:

      Query result GraphQL playground

In the returned results, you can see that a users object with an array value was returned. For now, only the previously created user appears in the users array because it is the only record in the table. More users will be returned in the users array if you execute the createUser mutation with new details. You can also observe that the img_uri field in the returned data has the hardcoded fallback image URL.

At this point, you have implemented both the CreateUser mutation and the Users query. Everything is in place for you to receive images from the second UploadProfileImage resolver and upload the received image to a bucket within DigitalOcean Spaces using the S3-compatible AWS Go SDK.

      Step 6 — Uploading Images to DigitalOcean Spaces

In this step, you will implement the second UploadProfileImage mutation resolver to upload an image to your Space.

      To begin, navigate to the Spaces section of your DigitalOcean console, where you will create a new bucket for storing the uploaded files from your backend application.

      Click the Create New Space button. Leave the settings at their default values and specify a unique name for the new Space:

      Digitalocean spaces

After the new Space has been created, navigate to the Settings tab and copy the Space’s endpoint, name, and region. Add these to the .env file within the GraphQL project in this format:



As an example, the following screenshot shows the Settings tab, highlighting the name, region, and endpoint details of the demo Space (Victory-space):

      Victory-space endpoint, name, and region

      As part of the prerequisites, you created a Space Access key and Secret key for your Space. Paste in your Access and Secret keys into the .env file within the GraphQL application in the following format:



      At this point, you will need to use the CTRL + C key combination to stop the GraphQL server, and execute the command below to restart the GraphQL application with the new credentials loaded into the application.

      Now that your Space credentials are loaded into the application, you will create the upload logic in the UploadProfileImage mutation resolver. The first step will be to add and configure the aws-sdk-go SDK to connect to your DigitalOcean Space.

One way to programmatically perform operations on your bucket within Spaces is through the use of compatible AWS SDKs. The AWS Go SDK is a development kit that provides a set of libraries for Go developers. A Go application can use these libraries when performing operations on AWS resources, such as file transfers to S3 buckets.

The DigitalOcean Spaces documentation provides a list of operations you can perform on the Spaces API using an AWS SDK. You will use the aws-sdk-go SDK to connect to your DigitalOcean Space.

      Execute the go get command to install the aws-sdk-go SDK into the application:

      • go get

      Over the next few code blocks, you will gradually put together the upload logic in the UploadProfileImage mutation resolver.

      First, open the schema.resolvers.go file. Add the highlighted lines to configure the AWS SDK with the stored credentials and establish a connection with your DigitalOcean Space:

      Note: The code within the code block below is incomplete, as you are gradually putting the upload logic together. You will complete the code in the subsequent code blocks.


package graph

import (
	// ...
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func (r *mutationResolver) UploadProfileImage(ctx context.Context, input model.ProfileImage) (bool, error) {
	SpaceRegion := os.Getenv("DO_SPACE_REGION")
	accessKey := os.Getenv("ACCESS_KEY")
	secretKey := os.Getenv("SECRET_KEY")

	s3Config := &aws.Config{
		Credentials: credentials.NewStaticCredentials(accessKey, secretKey, ""),
		Endpoint:    aws.String(os.Getenv("SPACE_ENDPOINT")),
		Region:      aws.String(SpaceRegion),
	}

	newSession := session.New(s3Config)
	s3Client := s3.New(newSession)
	// ...
}

      Now that the SDK is configured, the next step is to upload the file sent in the multipart HTTP request.

One way to handle the files sent is to read the content from the multipart request, temporarily save the content to a new file, upload the temporary file using the aws-sdk-go library, and then delete it after the upload. Using this approach, a client application such as a web application consuming this GraphQL API still uses the same GraphQL endpoint to perform file uploads, rather than relying on a third-party API to upload files.

      To achieve this, add the highlighted lines to the existing code within the UploadProfileImage mutation resolver in the schema.resolvers.go file:


package graph

import (
	// ...
	"bytes"
	"io/ioutil"
)

func (r *mutationResolver) UploadProfileImage(ctx context.Context, input model.ProfileImage) (bool, error) {
	// ...
	SpaceName := os.Getenv("DO_SPACE_NAME")

	userFileName := fmt.Sprintf("%v-%v", input.UserID, input.File.Filename)

	stream, readErr := ioutil.ReadAll(input.File.File)
	if readErr != nil {
		fmt.Printf("error from file %v", readErr)
	}

	fileErr := ioutil.WriteFile(userFileName, stream, 0644)
	if fileErr != nil {
		fmt.Printf("file err %v", fileErr)
	}

	file, openErr := os.Open(userFileName)
	if openErr != nil {
		fmt.Printf("Error opening file: %v", openErr)
	}
	defer file.Close()

	buffer := make([]byte, input.File.Size)
	_, _ = file.Read(buffer)
	fileBytes := bytes.NewReader(buffer)

	object := s3.PutObjectInput{
		Bucket: aws.String(SpaceName),
		Key:    aws.String(userFileName),
		Body:   fileBytes,
		ACL:    aws.String("public-read"),
	}

	if _, uploadErr := s3Client.PutObject(&object); uploadErr != nil {
		return false, fmt.Errorf("error uploading file: %v", uploadErr)
	}

	_ = os.Remove(userFileName)

	return true, nil
}

Using the ReadAll method from the ioutil package in the code block above, you first read the content of the file added to the multipart request sent to the GraphQL API, and then a temporary file is created to dump this content into.

Next, using the PutObjectInput struct, you created the structure of the file to be uploaded by specifying the Bucket, Key, ACL, and Body fields, with the body being the content of the temporarily stored file.

      Note: The Access Control List (ACL) field in the PutObjectInput struct has a public-read value to make all uploaded files available for viewing over the internet. You can remove this field if your application requires that uploaded data be kept private.

      After creating the PutObjectInput struct, the PutObject method is used to make a PUT operation, sending the values of the PutObjectInput struct to the bucket. If there is an error, a false boolean value and an error message are returned, ending the execution of the resolver function and the mutation in general.

      To test the upload mutation resolver, you can use an image of Sammy the Shark, DigitalOcean’s mascot. Use the wget command to download an image of Sammy:

      • wget

      Next, execute the cURL command below to make an HTTP request to the GraphQL API to upload Sammy’s image, which has been added to the request form body.

      Note: If you are on a Windows Operating System, it is recommended that you execute the cURL commands using the Git Bash shell due to the backslash escapes.

• curl localhost:8080/query -F operations='{ "query": "mutation uploadProfileImage($image: Upload! $userId : String!) { uploadProfileImage(input: { file: $image userId : $userId}) }", "variables": { "image": null, "userId" : "12345" } }' -F map='{ "0": ["variables.image"] }' -F [email protected]

      Note: We are using a random userId value in the request above because the process of updating a user’s record has not yet been implemented.

      The output will look similar to this, indicating that the file upload was successful:


      {"data": { "uploadProfileImage": true }}

      In the Spaces section of the DigitalOcean console, you will find the image uploaded from your terminal:

      A bucket within Digitalocean showing a list of uploaded files

      At this point, file uploads within the application are working; however, the files are not yet linked to the user who performed the upload. The goal of each file upload is to have the file uploaded into a storage bucket and then linked back to a user by updating the img_uri field of the user.

      Open the resolver.go file in the graph directory and add the code block below. It contains two methods: one to retrieve a user from the database by a specified field, and another to update a user’s record.


      import (
          "fmt"
          // ... other existing imports, plus your project’s graph/model package
      )

      func (r *mutationResolver) GetUserByField(field, value string) (*model.User, error) {
          user := model.User{}
          err := r.DB.Model(&user).Where(fmt.Sprintf("%v = ?", field), value).First()
          return &user, err
      }

      func (r *mutationResolver) UpdateUser(user *model.User) (*model.User, error) {
          _, err := r.DB.Model(user).Where("id = ?", user.ID).Update()
          return user, err
      }

      The first function, GetUserByField, accepts field and value arguments, both of type string. Using go-pg’s ORM, it executes a query against the database, fetching a row from the user table with a WHERE clause built from the field argument.

      The second function, UpdateUser, uses go-pg to execute an UPDATE statement to update a record in the user table. Using the Where method, a WHERE clause is added to the UPDATE statement so that only the row whose ID matches the one passed into the function is updated.

      Now you can use the two methods in the UploadProfileImage mutation. Add the content of the highlighted code block below to the UploadProfileImage mutation within the schema.resolvers.go file. This will retrieve a specific row from the user table and update the img_uri field in the user’s record after the file has been uploaded.

      Note: Place the highlighted code at the line above the existing return statement within the UploadProfileImage mutation.


      package graph

      func (r *mutationResolver) UploadProfileImage(ctx context.Context, input model.ProfileImage) (bool, error) {
          // ... existing upload logic ...
          _ = os.Remove(userFileName)
          user, userErr := r.GetUserByField("ID", *input.UserID)
          if userErr != nil {
              return false, fmt.Errorf("error getting user: %v", userErr)
          }
          fileUrl := fmt.Sprintf("https://%v.%v.digitaloceanspaces.com/%v", SpaceName, SpaceRegion, userFileName)
          user.ImgURI = fileUrl
          if _, err := r.UpdateUser(user); err != nil {
              return false, fmt.Errorf("err updating user: %v", err)
          }
          return true, nil
      }

      In the new code added to the schema.resolvers.go file, the field name ID and the user’s ID value are passed to the GetUserByField helper function to retrieve the record of the user executing the mutation.

      A new variable is then created and assigned a formatted string containing the link to the recently uploaded file, in the form https://<bucket>.<region>.digitaloceanspaces.com/<filename>. The ImgURI field in the retrieved user model is then reassigned this value as a link to the uploaded file.

      Paste the curl command below into your terminal, and replace the highlighted USER_ID placeholder in the command with the userId of the user created through the GraphQL playground in a previous step. Make sure the userId is wrapped in quotation marks so that the terminal can encode the value properly.

      • curl localhost:8080/query -F operations='{ "query": "mutation uploadProfileImage($image: Upload! $userId : String!) { uploadProfileImage(input: { file: $image userId : $userId}) }", "variables": { "image": null, "userId" : "USER_ID" } }' -F map='{ "0": ["variables.image"] }' -F [email protected]

      The output will look similar to this:


      {"data": { "uploadProfileImage": true }}

      To further confirm that the user’s img_uri was updated, you can use the fetchUsers query from the GraphQL playground in the browser to retrieve the user’s details. If the update was successful, you will see that the default placeholder URL in the img_uri field has been updated to the URL of the uploaded image.

      The output in the right pane will look similar to this:

      A query mutation to retrieve an updated user record using the GraphQL Playground

      In the returned results, the img_uri in the first user object returned from the query has a value that corresponds to a file upload to a bucket within DigitalOcean Spaces. The link in the img_uri field is made up of the bucket endpoint, the user’s ID, and lastly, the filename.

      To test the permission of the uploaded file set through the ACL option, you can open the img_uri link in your browser. Due to the default Metadata on the uploaded image, it will automatically download to your computer as an image file. You can open the file to view the image.

      Downloaded view of the uploaded file

      The image at the img_uri link will be the same image that was uploaded from the command line, indicating that the methods in the resolver.go file were executed correctly, and the entire file upload logic in the UploadProfileImage mutation works as expected.

      In this step, you uploaded an image into a DigitalOcean Space by using the AWS SDK for Go from the UploadProfileImage mutation resolver.


      In this tutorial, you performed a file upload to a created bucket on a DigitalOcean Space using the AWS SDK for Golang from a mutation resolver in a GraphQL application.

      As a next step, you could deploy the application built within this tutorial. The Go Dev Guide provides a beginner-friendly guide on how to deploy a Golang application to DigitalOcean’s App Platform, which is a fully managed solution for building, deploying, and managing your applications from various programming languages.


      How To Use ActiveStorage in Rails 6 with DigitalOcean Spaces

      The author selected the Diversity in Tech fund to receive a donation as part of the Write for DOnations program.


      When you’re building web applications that let users upload and store files, you’ll want to use a scalable file storage solution. This way you’re not in danger of running out of space if your application gets wildly popular. After all, these uploads can be anything from profile pictures to house photos to PDF reports. You also want your file storage solution to be reliable so you don’t lose your important customer files, and fast so your visitors aren’t waiting for files to transfer. You’ll want all of this to be affordable, too.

      DigitalOcean Spaces can address all of these needs. Because it’s compatible with Amazon’s S3 service, you can quickly integrate it into a Ruby on Rails application using the new ActiveStorage library that ships with Rails 6.

      In this guide, you’ll configure a Rails application so that it uses ActiveStorage with DigitalOcean Spaces. You’ll then run through the configuration necessary to get uploads and downloads blazing fast using direct uploads and Spaces’ built-in CDN (Content Delivery Network).

      When you’re finished, you’ll be ready to integrate file storage with DigitalOcean Spaces into your own Rails application.


      Before you begin this guide, you’ll need the following:

      Step 1 — Getting the Sample App Running

      Rather than build a complete Rails application from scratch, you’ll clone an existing Rails 6 application that uses ActiveStorage and modify it to use DigitalOcean Spaces as its image storage backend. The app you’ll work with is Space Puppies, an image gallery that will let people upload and view photographs of their favorite puppies. The application looks like the following figure:

      The Space Puppies application running in a web browser

      Open your terminal and clone the application from GitHub with the following command:

      • git clone

      You’ll see output that looks similar to this:


      Cloning into 'space-puppies'...
      remote: Enumerating objects: 122, done.
      remote: Counting objects: 100% (122/122), done.
      remote: Compressing objects: 100% (103/103), done.
      remote: Total 122 (delta 3), reused 122 (delta 3), pack-reused 0
      Receiving objects: 100% (122/122), 163.17 KiB | 1018.00 KiB/s, done.
      Resolving deltas: 100% (3/3), done.

      Next, check your Ruby version. Space Puppies uses Ruby 2.7.1, so run rbenv versions to check which version you have installed:

      If you’ve followed the prerequisite tutorials, you’ll only have Ruby 2.5.1 in that list, and your output will look like this:


        system
      * 2.5.1

      If you don’t have Ruby 2.7.1 in that list, install it using ruby-build:

      Depending on your machine’s speed and operating system, this might take a while. You’ll see output that looks like this:


      Downloading ruby-2.7.1.tar.bz2...
      -> Installing ruby-2.7.1...
      Installed ruby-2.7.1 to /root/.rbenv/versions/2.7.1

      Change to the space-puppies directory:

      rbenv will automatically change your Ruby version when you enter the directory. Verify the version:

      You’ll see output similar to the following:


      ruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux]

      Next, you will install the Ruby gems and JavaScript packages that the app needs to run. Then you’ll run the database migrations needed for the Space Puppies app.

      Install all the necessary gems using the bundle command:

      Then, to tell rbenv about any new binaries installed by Bundler, use the rehash command:

      Next, tell yarn to install the necessary JavaScript dependencies:

      Now create the database schema with Rails’ built-in migration tool:

      With all the libraries installed and the database created, start the built-in web server with the following command:

      Note: By default, rails s only binds to the local loopback address, meaning you can only access the server from the same computer that runs the command. If you’re running on a Droplet and you’d like to access your server from a browser running on your local machine, you’ll need to tell the Rails server to respond to remote requests by binding to You can do that with this command:

      Your server starts, and you’ll receive output like this:


      => Booting Puma
      => Rails application starting in development
      => Run `rails server --help` for more startup options
      Puma starting in single mode...
      * Version 4.3.5 (ruby 2.7.1-p83), codename: Mysterious Traveller
      * Min threads: 5, max threads: 5
      * Environment: development
      * Listening on tcp://
      * Listening on tcp://[::1]:3000
      Use Ctrl-C to stop

      Now you can access your application in a web browser. If you’re running the application on your local machine, navigate to http://localhost:3000. If you’re running on a Droplet or other remote server, then navigate to http://your_server_ip:3000.

      You’ll see the app’s interface, only this time without any puppies. Try adding a couple of images by clicking the New Puppy button.

      The Space Puppies application running in a web browser

      If you need puppy photos to use for testing, Unsplash has an extensive list you can use for testing. Review the Unsplash license if you plan to use these images in your projects.

      Before moving on, let’s walk through each layer of the application and look at how ActiveStorage works with each part so you can make the necessary changes for DigitalOcean Spaces. For a more detailed look at ActiveStorage, read the Active Storage Overview page in the official Rails documentation.

      First, look at the model, which represents an object in your application that you’re storing in the database. You’ll find the Puppy model in app/models/puppy.rb. Open this file in your text editor and you’ll see this code:


      class Puppy < ApplicationRecord
        has_one_attached :photo
      end

      You’ll find the has_one_attached macro in the model, which indicates there’s a photo attached to each Puppy model instance. These photos will be stored as ActiveStorage::Blob instances via an ActiveStorage::Attached::One proxy.

      Close this file.

      The next layer up the stack is the controller. In a Rails application, the controller is responsible for controlling access to database models and responding to requests from the user. The corresponding controller for the Puppy model is the PuppiesController which you will find in app/controllers/puppies_controller.rb. Open this file in your editor and you’ll see the following code:


      class PuppiesController < ApplicationController
        def index
          @puppies = Puppy.with_attached_photo
        end

        # ... snipped other actions ...
      end

      Everything in the file is standard Rails code, apart from the with_attached_photo call. This call causes ActiveRecord to load all of the associated ActiveStorage::Blob associations when you fetch the list of Puppy models. This is a scope that ActiveStorage provides to help you avoid an expensive N+1 database query.

      Finally, let’s look at the views, which generate the HTML your application will send to the user’s browser. There are a few views in this app, but you’ll want to focus on the view responsible for showing the uploaded puppy image. You’ll find this file at app/views/puppies/_puppy.html.erb. Open it in your editor, and you’ll see code like this:


      <div class="puppy">
        <%= image_tag puppy.photo.variant(resize_to_fill: [250, 250]) %>
      </div>

      ActiveStorage is designed to work with Rails, so you can use the built-in image_tag helper to generate a URL that points to an attached photo, wherever it happens to be stored. In this case, the app is using the variant support for images. When the user first requests this variant, ActiveStorage will automatically use ImageMagick via the image_processing gem, to generate a modified image fitting our requirements. In this case, it will create a puppy photo filling a 250x250 pixel box. The variant will be stored for you in the same place as your original photo, which means you’ll only need to generate each variant once. Rails will serve the generated version on subsequent requests.

      Note: Generating image variants can be slow, and you potentially don’t want your users waiting. If you know you’re going to need a particular variant, you can eagerly generate it using the .processed method: puppy.photo.variant(resize_to_fill: [250, 250]).processed

      It’s a good idea to do this kind of processing in a background job when you deploy to production. Explore Active Job and create a task to call processed to generate your images ahead of time.

      Now your application is running locally, and you know how all the code pieces fit together. Next, it’s time to set up a new DigitalOcean Space so you can move your uploads to the cloud.

      Step 2 — Setting up your DigitalOcean Space

      At the moment, your Space Puppies application stores images locally, which is fine for development or testing, but you almost certainly don’t want to use this mode in production. In order to scale the application horizontally by adding more application server instances, you’d need copies of each image on every server.

      In this step, you’ll create a DigitalOcean Space to use for your app’s images.

      Sign in to your DigitalOcean management console, click Create in the top right, and choose Spaces.

      Pick any data center and leave the CDN disabled for now; you’ll come back to this later. Ensure the file listing is set to Restrict File Listing.

      Choose a name for your Space. Remember that this will have to be unique across all Spaces users, so pick a unique name, like yourname-space-puppies. Click Create a Space:

      A screenshot of the DigitalOcean create space form with a name filled  in

      Warning: Be careful about access to the files you store on behalf of your customers. There have been many examples of data leaks and hacks due to misconfigured file storage. By default, ActiveStorage files are only accessible if you generate an authenticated URL, but it’s worth being vigilant if you’re dealing with customer data.

      You’ll then see your brand new Space.

      Click the Settings tab and take a note of your Space’s endpoint. You’ll need that when you configure your Rails application.

      Next, you’ll configure the Rails application to store ActiveStorage files in this Space. To do that securely, you need to create a new Spaces Access Key and Secret.

      Click API in the left navigation, then click Generate New Key in the bottom right. Give your new key a descriptive name like “Development Machine”. Your secret will only appear once, so be sure to copy it somewhere safe for a moment.

      A screenshot showing a Spaces access key

      In your Rails app, you’ll need a secure way to store that access token, so you’ll use Rails’ secure credential management feature. To edit your credentials, execute the following command in your terminal:

      • EDITOR="nano -w" rails credentials:edit

      This generates a master key and launches the nano editor so you can edit the values.

      In nano, add the following to your credentials.yml file, using your API key and secret from DigitalOcean:


      digitalocean:
        access_key: YOUR_API_ACCESS_KEY
        secret: YOUR_API_ACCESS_SECRET

      Save and close the file (Ctrl+X, then Y, then Enter), and Rails will store an encrypted version that’s safe to commit to source control in config/credentials.yml.enc.

      You will see output like the following:


      Adding config/master.key to store the encryption key: RANDOM_HASH_HERE

      Save this in a password manager your team can access. If you lose the key, no one, including you, can access anything encrypted with it.

            create  config/master.key

      File encrypted and saved.

      Now that you’ve configured your credentials, you’re ready to point your app to your new Spaces bucket.

      Open the file config/storage.yml in your editor and add the following definition to the bottom of that file:


      digitalocean:
        service: S3
        endpoint: https://your-spaces-endpoint-here
        access_key_id: <%= Rails.application.credentials.dig(:digitalocean, :access_key) %>
        secret_access_key: <%= Rails.application.credentials.dig(:digitalocean, :secret) %>
        bucket: your-space-name-here
        region: unused

      Note that the service says S3 rather than Spaces. Spaces has an S3-compatible API, and Rails supports S3 natively. Your endpoint is https:// followed by your Space’s endpoint, which you copied previously, and the bucket name is the name of your Space, which you entered when creating it. The bucket name is also displayed as the title in your Control Panel when you view your Space.

      This configuration file will be stored unencrypted, so instead of entering your access key and secret, you’re referencing the ones you just entered securely in credentials.yml.enc.

      Note: DigitalOcean uses the endpoint to specify the region. However, you need to provide the region, or ActiveStorage will complain. Since DigitalOcean will ignore it, you can set it to whatever value you’d like. The value unused in the example code makes it clear that you’re not using it.

      Save the configuration file.

      Now, you need to tell Rails to use Spaces for your file storage backend instead of the local file system. Open config/environments/development.rb in your editor and change the config.active_storage.service entry from :local to :digitalocean:


        # ...
        # Store uploaded files on the local file system (see config/storage.yml for options).
        config.active_storage.service = :digitalocean
        # ... 

      Save the file and exit your editor. Now start your server again:

      Visit http://localhost:3000 or http://your_server_ip:3000 in a browser once again.

      Upload some images, and the app will store them in your DigitalOcean Space. You can see this by visiting your Space in the DigitalOcean console. You will see the uploaded files and variants listed:

      files uploaded to a Space

      ActiveStorage uses random filenames by default, which is helpful when protecting uploaded customer data. Metadata, including the original filename, is stored in your database instead.

      Note: If you are getting an Aws::S3::Errors::SignatureDoesNotMatch error, your credentials may be incorrect. Run rails credentials:edit again and double-check them.

      Rails stores the names and some metadata about your files as ActiveStorage::Blob records. You can access the ActiveStorage::Blob for any of your records by calling an accessor method named after your attachment. In this case, the attachment is called photo.

      Try it out. Start a Rails console in your terminal:

      Grab the blob from the last puppy photo you uploaded:

      #=> #<ActiveStorage::Blob ...>

      You now have a Rails Application storing uploads in a scalable, reliable, and affordable object store.

      In the next two steps, you’ll explore two optional additions you can make to the app that will help improve this solution’s performance and speed for your users.

      Step 3 — Configuring the Spaces CDN (Optional)

      Note: For this step, you will need a domain with name servers pointing to DigitalOcean. You can follow the How to Add Domains guide to do that.

      Using a Content Delivery Network (CDN) will allow you to provide faster downloads of files for your users by locating copies of the files closer to them.

      You can investigate CDN performance using a tool like Uptrends CDN Performance Check. If you add the URL for one of the photos you uploaded in the previous step, you’ll see things are fast if you happen to be nearby, but things get a little slower as you move away geographically. You can get the URL using the Developer Tools in your browser, or by starting a Rails console (rails c) and calling service_url on an attachment.


      Here’s an example Uptrends report with a file located in the San Francisco data center. Notice that the times decrease depending on the distance from San Francisco. San Diego has a short time, while Paris has a much longer time:

      An example Uptrends CDN Performance Report

      You can improve speeds by enabling Spaces’ built-in CDN. Go to Spaces in your DigitalOcean Control Panel and click the name of the Space you created in Step 2. Next, choose the Settings tab and click Edit next to CDN (Content Delivery Network), then click Enable CDN.

      Now you need to choose a domain to use for your CDN and create an SSL Certificate for the domain. You can do this automatically using Let’s Encrypt. Click the Use a custom subdomain dropdown and then Add a new subdomain certificate.

      Find the domain you’d like to use, then choose the option to create a subdomain. A subdomain like cdn.your_domain is a standard naming convention. You can then give the certificate a name and click the “Generate Certificate and Use Subdomain” button.

      The filled-in Add Custom Subdomain form

      Press the Save button under CDN (Content Delivery Network).

      Your CDN is now enabled, but you need to tell your Rails Application to use it. This isn’t built into ActiveStorage in this version of Rails, so you’ll override some built-in Rails framework methods to make it work.

      Create a new Rails initializer called config/initializers/active_storage_cdn.rb and add the following code which will rewrite the URLs:


      Rails.application.config.after_initialize do
        require "active_storage/service/s3_service"

        module SimpleCDNUrlReplacement
          CDN_HOST = "" # your CDN host, e.g. the subdomain you configured above

          def url(...)
            url = super
            # Assumption: the non-CDN host is the bucket name joined to the
            # Space's endpoint host; adjust if your service exposes it differently.
            original_host = "#{bucket.name}.#{client.client.config.endpoint.host}"
            url.gsub(original_host, CDN_HOST)
          end
        end

        ActiveStorage::Service::S3Service.prepend(SimpleCDNUrlReplacement)
      end

      This initializer runs each time your application asks for a URL from an ActiveStorage::Service::S3Service provider. It then replaces the original, non-CDN host with your CDN host, defined as the CDN_HOST constant.

      You can now restart your server, and you’ll notice that each of your photos comes from the CDN. You won’t need to re-upload them, as DigitalOcean will take care of forwarding the content from the data center where you set up your Space out to the edge nodes.

      You might like to compare the speed of accessing one of your photos on Uptrends’ Performance Check site now to the pre-CDN speed. Here’s an example of using the CDN on a San Francisco-based Space. You can see a significant global speed improvement.

      The Uptrends CDN Performance Report after enabling the CDN

      Next you’ll configure the application to receive files directly from the browser.

      Step 4 — Setting up Direct Uploads (Optional)

      One last feature of ActiveStorage that you might like to consider is called a Direct Upload. Now, when your users upload a file, the data is sent to your server, processed by Rails, then forwarded to your Space. This can cause problems if you have many simultaneous users, or if your users are uploading large files, as each file will (in most cases) use a single app server thread for the entire duration of an upload.

      By contrast, a Direct Upload will go straight to your DigitalOcean Space with no Rails server hop in between. To do this, you’ll enable some built-in JavaScript that ships with Rails and configure Cross-Origin Resource Sharing (CORS) on your Space so that you can securely send requests directly to the Space despite them originating in a different place.

      First, you’ll configure CORS for your Space. You will use s3cmd to do this, and you can follow Setting Up s3cmd 2.x with DigitalOcean Spaces if you haven’t configured this to work with Spaces yet.

      Create a new file called cors.xml and add the following code to the file, replacing your_domain with the domain you’re using for development. If you are developing on your local machine, you’ll use http://localhost:3000. If you’re developing on a Droplet, this will be your Droplet IP address:
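A minimal policy for direct uploads might look like the sketch below; the allowed origin, methods, headers, and cache age are assumptions to adapt to your own setup:

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>your_domain</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3600</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```

The PUT method is the key entry here, since that is the request the browser makes when sending a file directly to the Space.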



      You can then use s3cmd to set this as the CORS configuration for your Space:

      • s3cmd setcors cors.xml s3://your-space-name-here

      There’s no output when this command runs successfully, but you can check that it worked by looking at your Space in the DigitalOcean Control Panel. Choose Spaces, then select the name of your Space, then select the Settings tab. You’ll see your configuration under the CORS Configurations heading:

      A successful CORS configuration for direct uploads

      Note: At the moment you need to use s3cmd rather than the Control Panel to configure CORS for “localhost” domains because the Control Panel treats these as invalid domains. If you’re using a non-localhost domain (like a Droplet IP) it’s safe to do it here.

      Now you need to tell Rails to use direct uploads, which you do by passing the direct_upload option to the file_field helper. Open app/views/puppies/new.html.erb in your editor and modify the file_field helper:


      <h2>New Puppy</h2>
      <%= form_with(model: @puppy) do |f| %>
        <div class="form-item">
          <%= f.label :photo %>
          <%= f.file_field :photo, accept: "image/*", direct_upload: true %>
        <div class="form-item">
          <%= f.submit "Create puppy", class: "btn", data: { disable_with: "Creating..." } %>
      <% end %>

      Save the file and start your server again:

      When you upload a new photo, your photo is uploaded directly to DigitalOcean Spaces. You can verify this by looking at the PUT request that’s made when you click the Create puppy button. You can find the requests by looking in your browser’s web console, or by reading the Rails server logs. You’ll notice that the image upload is significantly faster, especially for larger images.


      In this article you modified a basic Rails application using ActiveStorage to store files that are secure, fast, and scalable on DigitalOcean Spaces. You configured a CDN for fast downloads no matter where your users are located, and you implemented direct uploads so that your app servers will not be overwhelmed.

      You can now take this code and configuration and adapt it to fit your own Rails application.


      How To Store WordPress Assets on DigitalOcean Spaces With Ubuntu 20.04


      DigitalOcean Spaces is an object storage service that can be used to store large amounts of diverse, unstructured data. WordPress sites, which often include image and video assets, can be good candidates for object storage solutions. Using object storage for these types of static resources can optimize site performance by freeing up space and resources on your servers. For more information about object storage and WordPress check out our tutorial on How To Back Up a WordPress Site to Spaces.

      In this tutorial, you’ll learn how to use a WordPress plugin that works directly with DigitalOcean Spaces as the primary asset store. The DigitalOcean Spaces Sync plugin routes the data of your WordPress media library to Spaces and provides you with various configuration options based on your needs, streamlining the process of using object storage with your WordPress instance.


      This tutorial assumes that you have a WordPress instance running on a server, as well as a DigitalOcean Space. If you do not have this set up, you can complete the following:

      With these prerequisites in place, you’re ready to begin using this plugin.

      Modifying WordPress Permissions

      Throughout this tutorial, you will be working with the wp-content/uploads folder in your WordPress project, so it is important that this folder exists and has the correct permissions. You can create it with the mkdir command using the -p flag in order to create the folder if it doesn’t exist, and avoid throwing an error if it does:

      • sudo mkdir -p /var/www/html/wp-content/uploads

      You can now set permissions on the folder. First, set the ownership to your user (you will use sammy here, but be sure to use your non-root sudo user), and group ownership to the www-data group:

      • sudo chown -R sammy:www-data /var/www/html/wp-content/uploads

      Next, establish the permissions that will give the web server write access to this folder:

      • sudo chmod -R g+w /var/www/html/wp-content/uploads

      You will now be able to use the plugin to create a store in object storage for the assets in the wp-content/uploads folder, and to engage with your assets from the WordPress interface.

      Installing DigitalOcean Spaces Sync

      The first step in using DigitalOcean Spaces Sync is to install it in your WordPress folder. Navigate to the plugin folder within your WordPress directory:

      • cd /var/www/html/wp-content/plugins

      From here, install DigitalOcean Spaces Sync using the wp command:

      • wp plugin install do-spaces-sync

      To activate the plugin, you can run:

      • wp plugin activate do-spaces-sync

      From here, navigate to the Plugins tab on the left-hand side of your WordPress administrative dashboard:

      WordPress Plugin Tab

      You should see DigitalOcean Spaces Sync in your list of activated plugins:

      Spaces Sync Plugin Screen

      To manage the settings for DigitalOcean Spaces Sync, navigate to the Settings tab and select DigitalOcean Spaces Sync from the menu:

      Settings Tab

      DigitalOcean Spaces Sync will now give you options to configure your asset storage:

      DO Spaces Sync Configuration

      The Connection Settings field in the top half of the screen asks for your Spaces Access Key and Secret. It will then ask for your Container, which will be the name of your Space, and the Endpoint.

      You can determine the endpoint of your Space based on its URL. For example, if the URL of your Space is https://example-name.nyc3.digitaloceanspaces.com, then example-name will be your bucket/container, and https://nyc3.digitaloceanspaces.com will be your endpoint.

      In the plugin’s interface, the Endpoint section will be pre-filled with a default endpoint. You should modify this endpoint if your Space lives in another region.
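Since the Container and Endpoint are both pieces of the Space's URL, you can split them apart mechanically. This is a minimal shell sketch, assuming a URL in the standard https://bucket.region.digitaloceanspaces.com form (example-name and nyc3 are placeholders):

```shell
# Split a Space URL into its bucket (Container) and endpoint parts.
url="https://example-name.nyc3.digitaloceanspaces.com"  # placeholder Space URL
host="${url#https://}"           # strip the scheme
bucket="${host%%.*}"             # everything before the first dot
endpoint="https://${host#*.}"    # everything after the first dot
echo "$bucket"    # example-name
echo "$endpoint"  # https://nyc3.digitaloceanspaces.com
```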

      Next, you will be asked for File & Path Settings. In the field marked Full URL-path to files, you can input either a storage public domain, if your files will be stored only on your Space, or a full URL path, if you will store them on your Space and server.

      For example, if your WordPress project is located in /var/www/html, and you want to store files on both your server and Space, then you would enter:

      • http://your_server_ip/wp-content/uploads in the Full URL-path to files field
      • /var/www/html/wp-content/uploads in the Local path field

      The Storage prefix and Filemask settings are prefilled, and do not need to be modified unless you would like to specify certain types of files for your sync.

      The following sections will cover the specifics of storing files on both your server and your Space, and on your Space alone.

      Syncing and Saving Files in Multiple Locations

      DigitalOcean Spaces Sync offers the option of saving files to your server while also syncing them to your Space. This utility can be helpful if you need to keep files on your server, but would also like backups stored elsewhere. For this tutorial, you will go through the process of syncing a file to your Space while keeping it on your server. For the purposes of this example, you will assume that you have a file called sammy10x10.png that you would like to store in your media library and on your Space.

      First, navigate to the Settings tab on your WordPress administrative dashboard, and select DigitalOcean Spaces Sync from the menu of presented options.

      Next, in the Connection Settings field, enter your Spaces Key and Secret, followed by your Container and Endpoint. Remember, if the URL of your Space is https://example-name.nyc3.digitaloceanspaces.com, then example-name will be your Container, and https://nyc3.digitaloceanspaces.com will be your Endpoint. Test your connection by clicking the Check the Connection button at the bottom of the Connection Settings field:

      Check Connection Button

      Now you are ready to fill out the File & Path Settings.

      In the Full URL-path to files field you can enter your full URL path, since you are saving your file on your server and on your Space. You’ll use your server’s IP here, but if you have a domain, you can swap out the IP address for your domain name. For more about registering domains with DigitalOcean, see our tutorial on How To Set Up a Host Name with DigitalOcean. In this case, the Full URL-path to files will be http://your_server_ip/wp-content/uploads.

      Next, you will fill out the Local path field with the local path to the uploads directory: /var/www/html/wp-content/uploads.

      Because you are working with a single file, you do not need to modify the Storage prefix and Filemask sections. As your WordPress media library grows in size and variety, you can modify this setting to target individual file types using wildcards and extensions such as *.png in the Filemask field.
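The Filemask behaves like an ordinary shell glob, so you can preview which filenames a given mask would match before committing to it. A small sketch, using the two example filenames from this tutorial plus an invented notes.txt:

```shell
# Demonstrate glob-style matching like the Filemask setting uses.
for f in sammy10x10.png sammy-heart10x10.png notes.txt; do
  case "$f" in
    *.png) echo "matched: $f" ;;   # a *.png mask would sync this file
    *)     echo "skipped: $f" ;;   # anything else is left alone
  esac
done
```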

      Your final File & Path Settings will look like this:

      Sync Server and Cloud

      Be sure to save your configuration changes by clicking the Save Changes button at the bottom of the screen.

      Now you can add the file, sammy10x10.png, to your WordPress media library. You’ll use the wp media import command, which will import the file from your home directory to your WordPress media library. In this case, the home directory belongs to sammy, but in your case this will be your non-root sudo user. As you import the file, you will use the --path parameter to specify the location of your WordPress project:

      • wp media import --path=/var/www/html/ /home/sammy/sammy10x10.png

      Looking at the WordPress interface, you should now see the file in your Media Library. You can navigate there by following the Media Library tab on the left side of your WordPress administrative dashboard:

      Media Library Tab

      If you navigate to your Spaces page in the DigitalOcean control panel, you should also see the file in your Space.

      Finally, you can navigate to your wp-content/uploads folder, where WordPress will have created a sub-folder with the year and month. Within this folder you should see your sammy10x10.png file.
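Because WordPress files uploads into year/month sub-folders, a quick find is the easiest way to locate the copy on disk. The sketch below mocks that layout under /tmp (the date is invented) so it runs anywhere; on the server you would point find at /var/www/html/wp-content/uploads instead:

```shell
# Mock the dated upload layout and locate the file, as you would on the server.
mkdir -p /tmp/uploads-demo/2024/01
touch /tmp/uploads-demo/2024/01/sammy10x10.png

# Search the whole tree so you don't need to know the year/month in advance.
find /tmp/uploads-demo -name 'sammy10x10.png'
# → /tmp/uploads-demo/2024/01/sammy10x10.png
```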

      Storing Files on Spaces

      The DigitalOcean Spaces Sync plugin has an additional option that will allow you to store files only on your Space, in case you would like to optimize space and resources on your server. You will work with another file, sammy-heart10x10.png, and set your DigitalOcean Spaces Sync settings so that this file will be stored only on your Space.

      First, navigate back to the plugin’s main configuration page:

      DO Spaces Sync Configuration

      You can leave the Connection Settings information in place, but you will modify the File & Path Settings. First, in the Full URL-path to files field, you will enter the storage public domain. Again, this example uses your server IP, but you can swap it out for a domain if you have one: http://uploads.your_server_ip

      Next, navigate to Sync Settings, at the bottom of the page, and click the first box, which will allow you to “store files only in the cloud and delete after successful upload.” Your final File & Path Settings will look like this:

      Sync Cloud Only

      Be sure to save your changes by clicking the Save Changes button at the bottom of the screen.

      Back on the command line, move sammy-heart10x10.png from your user’s home directory to your Media Library using wp media import:

      • wp media import --path=/var/www/html/ /home/sammy/sammy-heart10x10.png

      If you navigate back to your WordPress interface, you will not see sammy-heart10x10.png or sammy10x10.png in your Media Library. Next, return to the command line and navigate to your wp-content/uploads directory — you should see that sammy-heart10x10.png is missing from your timestamped sub-folder.
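You can verify the deletion from the command line as well: in cloud-only mode, searching the uploads directory for the synced file should print nothing. A sketch against a mock uploads directory under /tmp (stand-in paths; on the server you would search /var/www/html/wp-content/uploads):

```shell
# In cloud-only mode the synced file is deleted locally, so find prints nothing.
mkdir -p /tmp/uploads-demo2/2024/01   # mock uploads tree with no local copy
matches=$(find /tmp/uploads-demo2 -name 'sammy-heart10x10.png')
if [ -z "$matches" ]; then
  echo "no local copy: file lives only in your Space"
fi
```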

      Finally, if you navigate to the Spaces page in the DigitalOcean control panel, you should see both files stored in your Space.


      Conclusion

      This tutorial covered two different options you can use to store your WordPress media files in DigitalOcean Spaces using DigitalOcean Spaces Sync. This plugin offers additional options for customization, which you can learn more about by reading the developer’s article “Sync your WordPress media with DigitalOcean Spaces.”

      If you would like more general information about working with Spaces, check out our introduction to DigitalOcean Spaces and our guide to best practices for performance on Spaces.
