How to use AWS S3 pre-signed URLs to upload and download files

Sohail SJ | TheZenLabs (@thesohailjafri) · Mar 18, 2024

So, this month we landed a new client for whom we must keep track of distributors' and doctors' orders and sales. The client required the S3 bucket to stay private, with access to files granted only through pre-signed URLs. In this blog, I will show you how to use pre-signed URLs to upload and download files from an AWS S3 bucket while keeping the bucket private.

Prerequisites

  • Basic knowledge of JavaScript
  • Basic knowledge of AWS S3 bucket
  • Basic knowledge of HTTP requests
  • Basic knowledge of Node.js and Express.js

Let's break down the task into smaller steps.

  1. Setting up the backend
  2. Develop a function to generate an AWS S3 pre-signed URL
  3. Configuring AWS S3 bucket
  4. Connecting function to an API endpoint
  5. Setting up the frontend
  6. Connecting frontend to the API

Step 1: Setting up the backend

mkdir backend
cd backend
npm init -y
npm install express aws-sdk
touch index.js

Windows users can run type nul > index.js to create the file.

// index.js
const express = require('express')
const app = express()
const AWS = require('aws-sdk')

app.listen(3000, () => {
  console.log('Server is running on port 3000')
})

Step 2: Develop a function to generate an AWS S3 pre-signed URL

// index.js
const express = require('express')
const app = express()
const AWS = require('aws-sdk')

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID, // Your AWS Access Key ID
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY, // Your AWS Secret Access Key
  region: process.env.AWS_REGION, // Your AWS region
  signatureVersion: 'v4', // Use signature version 4 for pre-signed URLs
})

const bucketName = process.env.AWS_BUCKET_NAME // Your bucket name

const awsS3GeneratePresignedUrl = async (
  path,
  operation = 'putObject', // Default value is putObject, for get use getObject
  expires = 60
) => {
  const params = {
    Bucket: bucketName, // Bucket name
    Key: path, // File name you want to save as in S3
    Expires: expires, // 60 seconds is the default value, change if you want
  }
  const uploadURL = await s3.getSignedUrlPromise(operation, params)
  return uploadURL
}

app.listen(3000, () => {
  console.log('Server is running on port 3000')
})
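The snippet reads the keys, region, and bucket name from environment variables. One simple way to supply them during local development is a .env file loaded with the dotenv package; here is a minimal sketch (the AWS_BUCKET_NAME variable name is my choice, not something fixed by AWS):

// At the very top of index.js, before the AWS config runs.
// Assumes you've run: npm install dotenv
require('dotenv').config()

// .env (keep this file out of version control):
// AWS_ACCESS_KEY_ID=your-access-key-id
// AWS_SECRET_ACCESS_KEY=your-secret-access-key
// AWS_REGION=your-region
// AWS_BUCKET_NAME=your-bucket-name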

Step 3: Configuring AWS S3 bucket

If we try to upload or download through the pre-signed URL from the browser, we will get a CORS (Cross-Origin Resource Sharing) error. To fix this, we need to configure our S3 bucket to allow cross-origin PUT and GET requests from our front end. To do this, we need to add a CORS configuration to our S3 bucket in the following way:

  • Open the Amazon S3 console at https://console.aws.amazon.com/s3/
  • Search for the bucket you want to configure and click on it
  • Click on the Permissions tab
  • Scroll down to the Cross-origin resource sharing (CORS) section
  • Click on Edit and add the following policy
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "GET",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
  • Click on Save changes
  • Also make sure that Block public access (bucket settings) is turned on if you want to keep your bucket private

Change AllowedOrigins to your front end's origin to make it more secure, or use the wildcard * to allow access from any origin.
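For example, if your front end runs at https://app.example.com (a placeholder domain), the entry would look like this:

"AllowedOrigins": [
    "https://app.example.com"
]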

Step 4: Connecting function to an API endpoint

We will have 2 endpoints, one to generate a pre-signed URL for uploading a file and the other to generate a pre-signed URL for downloading a file.

// index.js
const express = require('express')
const app = express()
const AWS = require('aws-sdk')

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID, // Your AWS Access Key ID
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY, // Your AWS Secret Access Key
  region: process.env.AWS_REGION, // Your AWS region
  signatureVersion: 'v4', // Use signature version 4 for pre-signed URLs
})

const bucketName = process.env.AWS_BUCKET_NAME // Your bucket name

const awsS3GeneratePresignedUrl = async (
  path,
  operation = 'putObject', // Default value is putObject, for get use getObject
  expires = 60
) => {
  const params = {
    Bucket: bucketName, // Bucket name
    Key: path, // File name you want to save as in S3
    Expires: expires, // 60 seconds is the default value, change if you want
  }
  const uploadURL = await s3.getSignedUrlPromise(operation, params)
  return uploadURL
}

app.get('/generate-presigned-url', async (req, res) => {
  const { path } = req.query
  const uploadURL = await awsS3GeneratePresignedUrl(path, 'putObject', 60)
  res.send({ path, uploadURL })
})

app.get('/download-presigned-url', async (req, res) => {
  const { path } = req.query
  const downloadURL = await awsS3GeneratePresignedUrl(path, 'getObject', 60)
  res.send({ downloadURL })
})

app.listen(3000, () => {
  console.log('Server is running on port 3000')
})
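Once the server is running, you can sanity-check the endpoints with a small Node script (a sketch assuming Node 18+ for the built-in fetch; file1.txt is just a placeholder key):

// test-presign.js: run with node test-presign.js while the server is up.
const main = async () => {
  // Ask the API for an upload URL for a placeholder key.
  const res = await fetch(
    'http://localhost:3000/generate-presigned-url?path=file1.txt'
  )
  const { uploadURL } = await res.json()

  // PUT a small text body straight to S3 through the pre-signed URL.
  const put = await fetch(uploadURL, {
    method: 'PUT',
    headers: { 'Content-Type': 'text/plain' },
    body: 'hello from a pre-signed URL',
  })
  console.log('Upload status:', put.status) // 200 means it worked
}

main()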

Step 5: Setting up the frontend

For the front end, I am using React.js, but you can use any front-end framework of your choice. We also install axios to make HTTP requests.

npx create-react-app frontend
cd frontend
npm install axios

Step 6: Connecting frontend to the API

// App.js
import React, { useState } from 'react'
import axios from 'axios'

export default function App() {
  const [uploadURL, setUploadURL] = useState('')
  const [downloadURL, setDownloadURL] = useState('')

  const generatePresignedURL = async (path, type) => {
    // Pick the endpoint that matches the requested operation.
    const endpoint =
      type === 'upload' ? 'generate-presigned-url' : 'download-presigned-url'
    const response = await axios.get(
      `http://localhost:3000/${endpoint}?path=${path}`
    )
    if (type === 'upload') {
      setUploadURL(response.data.uploadURL)
    } else {
      setDownloadURL(response.data.downloadURL)
    }
  }

  // PUT the selected file to S3 using the pre-signed upload URL.
  // Note: click "Generate Upload URL" first, then choose a file.
  const uploadFile = async (file) => {
    if (!file || !uploadURL) return
    await fetch(uploadURL, {
      method: 'PUT',
      headers: { 'Content-Type': file.type },
      body: file,
    })
  }

  return (
    <div>
      <input type="file" onChange={(e) => uploadFile(e.target.files[0])} />
      <button onClick={() => generatePresignedURL('file1.txt', 'upload')}>
        Generate Upload URL
      </button>
      <button onClick={() => generatePresignedURL('file1.txt', 'download')}>
        Generate Download URL
      </button>
      {downloadURL && (
        <a href={downloadURL} download>
          Download file
        </a>
      )}
    </div>
  )
}
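One caveat: browsers ignore the download attribute on cross-origin links, so the anchor above may open the file instead of saving it. If you need a forced download, one option is to fetch the pre-signed URL and save the blob (a sketch; downloadFile is my name for the helper):

// Fetch the object through the pre-signed GET URL and trigger a save dialog.
const downloadFile = async (downloadURL, filename) => {
  const res = await fetch(downloadURL)
  const blob = await res.blob()
  const url = URL.createObjectURL(blob)
  const a = document.createElement('a')
  a.href = url
  a.download = filename // honored here because blob: URLs are same-origin
  a.click()
  URL.revokeObjectURL(url)
}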

Use Cases

  1. Upload files to the S3 bucket from your front end without exposing your AWS credentials.
  2. Download files from the S3 bucket to your front end without exposing your AWS credentials.
  3. Upload files directly to the S3 bucket from your front end without building an API to proxy file uploads.

Bonus Code Snippet

// Code to get uploadURL and PUT the file to the S3 bucket using the fetch API.
const putFileToS3Api = async ({ uploadURL, file }) => {
  try {
    if (!file) throw new Error('No file provided')
    const res = await fetch(uploadURL, {
      method: 'PUT',
      headers: {
        // Fall back to a generic binary type if the browser can't detect one.
        'Content-Type': file.type || 'application/octet-stream',
      },
      body: file,
    })
    return res
  } catch (error) {
    console.error(error)
  }
}

const getUploadUrlApi = async ({ filename }) => {
  try {
    // Update the URL to your API endpoint.
    const res = await axios.get(
      'http://localhost:3000/generate-presigned-url',
      { params: { path: filename } }
    )
    return res
  } catch (error) {
    console.error(error)
  }
}

export const uploadFileToS3Api = async ({ file }) => {
  try {
    if (!file) throw new Error('No file provided')
    const generateUploadRes = await getUploadUrlApi({ filename: file.name })
    if (!generateUploadRes.data.uploadURL)
      throw new Error('Error generating pre signed URL')
    const uploadRes = await putFileToS3Api({
      uploadURL: generateUploadRes.data.uploadURL,
      file,
    })
    if (!uploadRes.ok) throw new Error('Error uploading file to S3')
    return {
      message: 'File uploaded successfully',
      uploadURL: generateUploadRes.data.uploadURL,
      path: generateUploadRes.data.path,
    }
  } catch (error) {
    console.error(error)
  }
}

Call the uploadFileToS3Api function with the file you want to upload to the S3 bucket in one go. Additionally, you can use await Promise.all to upload multiple files at once.

const uploadFiles = async (files) => {
  const uploadPromises = files.map((file) => uploadFileToS3Api({ file }))
  const uploadResults = await Promise.all(uploadPromises)
  console.log(uploadResults)
}
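To wire the helper into the React component from Step 6, the file input's onChange handler can call it directly (a sketch; onFileChange is my name for the handler):

// In App.js: hypothetical handler using the bonus helper.
const onFileChange = async (e) => {
  const result = await uploadFileToS3Api({ file: e.target.files[0] })
  if (result) console.log(result.message, result.path)
}

// <input type="file" onChange={onFileChange} />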

Additional Resource

More information about AWS S3 pre-signed URLs can be found in the AWS documentation.

Conclusion

That's it! You have successfully learned how to generate pre-signed URLs for uploading and downloading files from an S3 bucket while keeping the bucket private.
I hope you find this blog helpful. If you have any questions, feel free to ask in the comments below or contact me on Twitter @thesohailjafri.

Comments (15)

  • sahilchaubey03 (Mar 18, 2024)

    Great article brother... Helped me a lot

  • Prathamesh Ethiraj (Mar 18, 2024)

    Best article I ever went through 💯

  • Rahul Chaurasia (Mar 18, 2024)

    I read multiple articles related to AWS signed URLs and I would say that this one pieces them together.
    Good read!

    • Sohail SJ | TheZenLabs (Mar 18, 2024)

      Thanks brother, I will try to update the article with more details on security

  • Mike Stemle (Mar 18, 2024)

    There are a number of security problems here.

    Never use plaintext AWS credentials

    1. You shouldn't have plaintext AWS credentials in memory for your running web server. If you do, and someone is able to successfully inject code into your running server process, or otherwise dump heap, they could exfiltrate your credentials
    2. Using plaintext credentials like this is often a recipe for never changing the credentials, which increases the harm an attacker could cause if those credentials are ever leaked.
    3. If you're running your server in AWS, an execution role or instance profile should attach to an IAM policy document which gives you access. It is significantly safer to use IAM roles and STS than it is to use IAM users.
    4. AWS IAM best practices discourage the use of IAM users: docs.aws.amazon.com/IAM/latest/Use...

    Don't have public buckets

    AWS is usually pretty clear about the risks of not having buckets configured to block all public access. It is far more secure to have your users upload the file to a server which then performs an s3:PutObject call.

    If someone is able to get your service to give them signed URLs for uploading contents, you may very quickly find harmful files uploaded to your bucket.

    Your CORS settings invite SSRF and CSRF

    Your axios error handling is insecure

    The axios module includes all authorization headers in the error object it returns, so your console.error() will log sensitive information.

    Finally, you're using the old version of the AWS SDK

    AWS-SDK v2 is being deprecated soon: aws.amazon.com/blogs/developer/ann...

    The V3 of the SDK is pretty easy to use, and the nice thing about the change is that it's going to be similarly functional across all of the various libraries (e.g. Rust, Java, JavaScript), using similar patterns. Gone will be the days of language-specific AWS SDK patterns.
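
    For reference, the presign helper from this post looks roughly like this in SDK v3 (a sketch, not from the thread; it assumes @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner are installed, and AWS_BUCKET_NAME is a placeholder variable):

    // Credentials are resolved from the environment or an instance profile.
    const { S3Client, PutObjectCommand, GetObjectCommand } = require('@aws-sdk/client-s3')
    const { getSignedUrl } = require('@aws-sdk/s3-request-presigner')

    const s3 = new S3Client({ region: process.env.AWS_REGION })

    const presign = async (path, operation = 'put', expires = 60) => {
      const Command = operation === 'put' ? PutObjectCommand : GetObjectCommand
      const command = new Command({
        Bucket: process.env.AWS_BUCKET_NAME,
        Key: path,
      })
      return getSignedUrl(s3, command, { expiresIn: expires })
    }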

    • Sohail SJ | TheZenLabs (Mar 18, 2024)

      Damn, thank you, I really mean it. That's a lot to take in, but I will study the points you mentioned one by one and try to improve my practice in existing and upcoming projects 🤝🙌😬

      • Sohail SJ | TheZenLabs (Mar 18, 2024)

        In a production setup the API routes usually sit behind auth middleware, but I guess I can move the entire logic to the server so it takes single/multiple files and returns the uploaded paths, to keep it modular. That way I don't expose my bucket in any way.

      • Mike Stemle (Mar 18, 2024)

        I very much appreciate you receiving that well. Security is hard, and security in the cloud is harder. There are a lot of tools but it's really hard to keep up with all of them.

        If it helps, I never use IAM users. For humans, we should use federated authentication using something like Okta, or Auth0, and for infrastructure running code in AWS we should use execution roles or instance profiles.

        Nobody can steal credentials which do not exist, or (in the case of AWS STS) are ephemeral and expire quickly.

        • Sohail SJ | TheZenLabs (Mar 24, 2024)

          Okay, understood. I will try to practice using Okta or Auth0 for future projects.

          • Lee (Apr 3, 2024)

            Wholesome exchange right here.... Love this community 🙌

    • Marek Krzyżowski (Nov 6, 2024)

      @manchicken
      1) So how should I store them? How do you store them?

      • Mike Stemle (Nov 6, 2024)

        Fetch them at runtime and discard them when you no longer need them. Also, use instance profiles and STS when possible, avoiding having long-lived secrets in the first place.

        In 2024, there is no good reason to rely on access tokens and user passwords in AWS.

  • Lee (Apr 3, 2024)

    Nice share! 🙌
