Signed URLs expiring mid-download: UX and resume patterns
Level: intermediate · ~15 min read · Intent: informational
Audience: developers, ops engineers, product teams, and data teams
Prerequisites
- basic familiarity with HTTP and downloads
- basic familiarity with signed or presigned URLs
- optional familiarity with S3 or CloudFront
Key takeaways
- For S3 presigned URLs, the crucial behavior is request-time validation: if the download starts before expiry it can continue, but a restarted request after expiry fails.
- Large-download UX should assume that resume attempts may require a fresh signed URL, especially when the client reconnects after the original link has expired.
- Range requests are a core recovery primitive because they let clients fetch only missing byte ranges instead of restarting the whole object.
- The safest architecture for large or sensitive downloads is often a brokered flow that can mint fresh URLs or controlled redirects on demand instead of handing the browser one long-lived origin URL and hoping for the best.
References
- Amazon S3 — Download and upload objects with presigned URLs
- Amazon S3 on Outposts — Presigned URL expiration behavior
- Amazon S3 — Performance guidelines
- Amazon S3 GetObject API
- Amazon CloudFront — How CloudFront processes partial requests for an object
- Amazon CloudFront — Use signed URLs
- Amazon CloudFront — Use signed cookies
- RFC 9110 — HTTP Semantics
FAQ
- What happens if a signed URL expires during a download?
- For S3-style presigned downloads, a transfer that starts before expiration can usually continue. But if the connection drops and the client must start a new HTTP request after expiry, that new request fails.
- Can a browser resume a large download after the original signed URL expires?
- Not with the same expired URL. Resume usually requires a new signed URL or a brokered endpoint that can authorize and hand back a fresh request target.
- Why do Range requests matter for signed downloads?
- Range requests let the client fetch only the missing part of a file instead of restarting from byte zero. That reduces wasted bandwidth and makes resume flows practical.
- Should I just increase signed URL expiry times?
- Usually not by default. Longer expiries reduce friction but increase exposure. A better pattern is often shorter expiries plus refreshable or brokered resume logic.
- When is CloudFront a better fit than direct origin signed URLs?
- CloudFront is often a better fit when you want edge delivery, partial content handling, stronger distribution controls, or a clean separation between your browser UX and your origin URL design.
Signed URLs expiring mid-download: UX and resume patterns
Signed URLs feel simple until a large download fails halfway through.
A user clicks a link. The file starts downloading. Network conditions shift, the browser pauses, the laptop sleeps, or the connection drops. Then the user tries to resume — and the transfer fails even though the download started successfully before.
That is where teams discover that “signed URL expiry” is not just a security setting. It is a product and UX design problem.
This page addresses the questions teams actually search for around that problem:
- signed URL expires during download
- S3 presigned URL resume fails
- resume large download after signed URL expiry
- presigned URL mid-download behavior
- CloudFront signed URL resume download
- Range requests for expiring download links
- large file download UX with signed URLs
- refresh token pattern for download URLs
- why download started before expiry but resume fails later
The most important behavior to understand is this:
signed URLs are checked at request time, not as a continuous streaming entitlement.
That single point explains most of the weirdness users and support teams see.
The key behavior: start before expiry can work, resume after expiry can fail
AWS documents this very clearly for S3-style presigned downloads.
When Amazon S3 checks a presigned URL, it evaluates expiration at the time of the HTTP request. If a client begins downloading a large file immediately before the expiration time, the download can continue even if the expiration time passes during the transfer. But if the connection drops and the client tries to restart the download after the expiration time has passed, that new request fails.
That is the core fact that should shape your UX.
It means a signed-download experience has two very different states:
State 1. In-flight request that already started
This can keep going.
State 2. New request after expiry
This fails unless you obtain a fresh signed URL or a different authorized path.
A lot of product confusion comes from treating those as the same thing. They are not.
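The two states above can be made concrete with a toy signing scheme. This is an illustrative sketch only, not SigV4 or any real platform's algorithm; the hostname and secret are invented. The point it demonstrates is that the expiry check runs once per HTTP request, so an in-flight stream is never re-validated, but a restarted request is.

```python
import hmac
import hashlib
from urllib.parse import urlencode, urlparse, parse_qs

SECRET = b"demo-signing-key"  # illustrative only; real platforms derive keys differently

def sign_url(path: str, expires: int) -> str:
    """Attach an expiry timestamp and an HMAC over (path, expiry)."""
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"https://files.example.com{path}?{urlencode({'expires': expires, 'sig': sig})}"

def verify_request(url: str, request_time: int) -> bool:
    """Expiry is checked ONCE, at request time. A stream that started
    while this returned True is never re-checked mid-transfer."""
    parsed = urlparse(url)
    qs = parse_qs(parsed.query)
    expires = int(qs["expires"][0])
    expected = hmac.new(SECRET, f"{parsed.path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, qs["sig"][0]) and request_time <= expires

url = sign_url("/exports/report.zip", expires=1_000_000)
assert verify_request(url, request_time=999_940)        # State 1: starts before expiry
assert not verify_request(url, request_time=1_000_060)  # State 2: restarted after expiry
```

The asymmetry in the last two lines is exactly the gap that resume logic has to bridge.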
Why this matters more for large files
Short downloads rarely expose the issue because they finish before expiry windows become relevant.
Large files change everything:
- mobile networks fluctuate
- browsers pause background tabs
- laptops sleep
- users lose Wi-Fi and reconnect
- corporate proxies interrupt long transfers
- downloads are restarted by the browser or the user
- users try to resume instead of starting over
That means large-file signed-download UX has to be designed explicitly, not treated as a basic “click link to file” flow.
The second key behavior: Range requests make recovery possible
Range requests are a core part of solving this problem.
RFC 9110 defines 206 Partial Content as the response used when a server successfully fulfills a range request. Amazon CloudFront’s range request documentation explains that viewers can use the Range header to download objects in smaller parts and recover from partially failed transfers. Amazon S3’s performance guidance also recommends byte-range fetches for large objects because smaller ranges improve retry times when requests are interrupted.
This matters because a resilient resume flow should not always restart at byte zero.
If the user already has 600 MB of a 1.2 GB file and the connection breaks, a good client can request only the missing range instead of starting again.
That is the foundation of practical resume behavior.
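A resume request then only needs a Range header for the missing tail. A minimal sketch of building that header, using the 600 MB example above (the helper name is ours, not a library API):

```python
def resume_range_header(bytes_received: int, total_size: int) -> dict:
    """Build a Range header asking only for the missing tail of the file."""
    # bytes=<first>-<last> is inclusive; the open-ended form
    # bytes=<first>- asks for everything from that offset onward.
    if bytes_received >= total_size:
        raise ValueError("download already complete")
    return {"Range": f"bytes={bytes_received}-"}

# 600 MB of a 1.2 GB file already on disk: request only the rest.
header = resume_range_header(600 * 1024**2, 1200 * 1024**2)
assert header == {"Range": "bytes=629145600-"}
# A server that honors the range replies 206 Partial Content with a
# Content-Range header describing the bytes actually returned.
```

If the server ignores the Range header it replies 200 with the full body, so clients should check for 206 before appending to a partial file.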
Why the default browser download flow is often not enough
A plain download link handed directly to the browser is easy to build and hard to improve later.
Problems:
- the browser owns most retry behavior
- your app often cannot distinguish “slow” from “expired”
- support cannot tell whether failure came from network loss or URL expiry
- resuming may depend on browser-specific behavior
- refreshing the signed URL is hard once the browser is already outside your app flow
That is why large signed-download flows often need one of these stronger patterns:
- a brokered download endpoint
- a refreshable download manifest
- segmented or chunked range-aware clients
- or a CloudFront-based delivery layer with cleaner control over partial requests
Pattern 1: the basic signed-link handoff
This is the simplest pattern:
- user clicks download
- backend generates a signed URL
- browser is redirected or given the URL
- browser downloads directly from storage or CDN
Pros
- simple
- scalable
- bandwidth is offloaded from your app servers to storage or the CDN
- easy to implement quickly
Cons
- hard to refresh if the link expires
- poor control over resume UX
- weak support ergonomics
- difficult to distinguish link expiry from transient failure
- browser behavior varies
This pattern is fine for:
- small files
- low-friction internal tools
- short expiry windows where restart cost is tiny
It is weaker for:
- large exports
- unreliable client networks
- high-value downloads
- consumer-facing large-file flows
Pattern 2: signed link plus refresh-on-failure
This is often the best next step up.
Flow:
- app creates a short-lived signed URL
- client starts download
- if download fails or resume is attempted after expiry, client calls your app
- app verifies the user still has permission
- app issues a fresh signed URL
- client retries from the missing byte range if possible
This pattern is strong because it keeps expiry short without forcing users to restart blindly.
What it needs
- a stable file identity
- byte progress tracking or browser/client support for resume
- a way to request a new signed URL
- permission checks that do not assume the original URL is still valid
- useful error messaging
Good UX message
Instead of:
- “Download failed”
say something like:
- “Your secure download link expired while the transfer was paused. We generated a fresh link and resumed the remaining download.”
That explanation reduces support confusion immediately.
Pattern 3: brokered download endpoint
This is the most product-friendly pattern in many apps.
Instead of exposing the storage signed URL directly as the main user-facing link, the app exposes a stable app endpoint such as:
/downloads/{jobId}
That endpoint:
- checks user authorization
- resolves the current file state
- issues or refreshes a signed URL
- redirects or streams accordingly
- records telemetry
- can return a resumable plan if needed
Why this is strong:
- the browser or client always comes back to your app-level contract
- you can swap storage providers or signing mechanics later
- support tooling becomes simpler
- retry UX becomes explicit
- permission can be rechecked at every new request
The cost is additional backend design, but for serious download products it is often worth it.
Pattern 4: segmented download clients
For very large files, an advanced pattern is to download in parts using Range requests intentionally.
AWS notes that S3 supports byte-range fetches and concurrent connections, and CloudFront documents partial request handling for the same reason.
A segmented client can:
- request fixed-size chunks
- track completed byte ranges
- request only missing ranges after interruption
- refresh signed URLs between chunks if needed
- provide much better progress reporting
- reduce wasted bandwidth
This is more work than a simple anchor tag. It is also much more robust for:
- multi-gigabyte downloads
- shaky mobile connections
- enterprise networks
- or exports that users routinely pause and resume
Pattern 5: CloudFront signed delivery instead of direct origin URLs
CloudFront signed URLs are often a better fit when:
- you want edge delivery
- you want better behavior for partial requests
- you want a distribution layer separate from your origin URL design
- you want more flexible private-content access patterns
CloudFront documents signed URLs as a way to control access to private content, and also documents how partial (range) requests improve download efficiency and recovery from partially failed transfers.
That does not magically solve expiry. But it can improve the overall delivery model, especially when combined with:
- a stable app entrypoint
- edge-friendly range support
- clearer logging
- and CDN-based performance benefits
For some systems, the best answer is:
- app endpoint for authorization and orchestration
- CloudFront for delivery
- signed URL or cookie at the edge
- range-aware resume logic in the client
UX patterns that reduce support pain
Technical behavior is only half the problem. Users need a flow that makes failure understandable.
1. Tell users the link is time-limited
Do not hide the fact that the link is temporary if it matters for their behavior.
2. Differentiate “download interrupted” from “access expired”
Those are different remedies.
3. Offer an explicit retry or refresh action
Do not force users to rediscover the original export page if the file still exists and permissions still hold.
4. Preserve file identity across retries
The user should feel like they are continuing one download, not starting a mystery second process.
5. Show byte progress when possible
If the client can resume from byte ranges, the messaging should reflect that.
6. Avoid blamey wording
Prefer “The secure download link expired while resuming” over “Invalid URL”.
That small difference matters.
Resume logic: what a strong implementation tracks
A resilient signed-download system usually tracks some combination of:
- export job ID
- object key
- content length
- ETag or equivalent version identifier
- whether ranges are supported
- bytes already written
- time remaining before the current link expires
- whether the file is still authorized for the user
That tracking helps avoid bad resume behavior such as:
- resuming against a changed file
- requesting the wrong range
- or refreshing a signed URL for a stale export artifact
Conditional or identity checks such as ETag validation can help ensure the resumed download still corresponds to the intended object state.
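HTTP supports this directly: RFC 9110 defines the `If-Range` header, which lets a client send the stored ETag so the server returns 206 only if the object is unchanged. A sketch of the state such a system might track (the field names are illustrative, not a standard schema):

```python
from dataclasses import dataclass

@dataclass
class ResumeState:
    job_id: str
    object_key: str
    content_length: int
    etag: str                # identity of the object version we started with
    bytes_written: int = 0
    accepts_ranges: bool = True

    def can_resume_against(self, current_etag: str) -> bool:
        """Only resume if the object is unchanged and ranges are supported;
        otherwise the safe move is a restart from byte zero."""
        return self.accepts_ranges and current_etag == self.etag

state = ResumeState("job-42", "exports/report.csv", 1_200_000,
                    '"abc123"', bytes_written=600_000)
assert state.can_resume_against('"abc123"')
assert not state.can_resume_against('"def456"')  # file changed: restart, do not resume
```

Persisting this record per download job is also what makes support tooling possible: the failure state is inspectable instead of lost in the browser.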
Error handling patterns that work
A good error model separates:
Network interruption
Retry with the same signed URL if it is still valid and the request can continue.
Expired signed URL
Refresh authorization, obtain a new URL, then retry the missing range.
File no longer available
Tell the user the export is no longer retained and offer regeneration if possible.
Permission changed
Do not resume silently. Recheck authorization and explain the access change clearly.
Range not supported
Fall back to full restart or switch to a brokered flow that supports range-based delivery.
These states should not collapse into one generic “download failed” bucket.
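A simple classifier keeps those states from collapsing. This sketch assumes the platform signals an expired or unauthorized signed URL with HTTP 403, as S3 does; the status codes and state names would need adjusting for other backends.

```python
from datetime import datetime, timedelta, timezone

def classify_failure(status, now, expires_at, range_requested=True):
    """Map a failed or suspicious response to a distinct recovery state."""
    if status is None:
        return "network_interruption"   # retry the same URL if it is still valid
    if status == 403 and now > expires_at:
        return "expired_url"            # refresh authorization, retry the missing range
    if status == 403:
        return "permission_changed"     # recheck access and explain the change
    if status == 404:
        return "file_unavailable"       # offer regeneration if possible
    if status == 200 and range_requested:
        return "range_not_supported"    # server ignored Range; plan a full restart
    return "unknown"

exp = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
assert classify_failure(None, exp, exp) == "network_interruption"
assert classify_failure(403, exp + timedelta(minutes=1), exp) == "expired_url"
assert classify_failure(403, exp - timedelta(minutes=1), exp) == "permission_changed"
```

Each state then maps to its own retry logic and its own user-facing message, which is exactly what generic “download failed” handling loses.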
Should you just make the URL last longer?
Usually not as the default answer.
Longer-lived URLs reduce friction, but they also:
- increase exposure if the link is leaked
- reduce your control surface
- encourage casual sharing
- blur the difference between authenticated access and bearer access
- and can still fail if the underlying credentials expire earlier
A better default is often:
- shorter-lived signed URLs
- plus a reliable refresh and resume story
That keeps security tighter while preserving usability.
A practical architecture checklist
Use this when designing the feature.
1. Confirm request-time expiry behavior in your chosen platform
Do not design from assumptions. For S3, the key rule is start-before-expiry can continue, restart-after-expiry fails.
2. Decide whether plain browser handoff is enough
For small files, maybe yes. For large files, often no.
3. Support Range requests where practical
This is the foundation of efficient resume behavior.
4. Track stable file identity
Use object identity and versioning cues so resumes target the right file.
5. Separate permission from transfer state
The expired URL should not force the user to lose all context if they are still authorized.
6. Add a refreshable path
Either a broker endpoint or a client refresh flow.
7. Instrument failures carefully
Log:
- start times
- expiries
- byte counts
- retry attempts
- refresh success or failure
- range requests
- final completion
8. Write user-facing copy for the real failure states
Support messaging is part of the architecture.
Common anti-patterns
Anti-pattern 1. One giant signed URL and no recovery plan
Works in demos, breaks in the real world.
Anti-pattern 2. Treating resume as “the browser will handle it”
Sometimes it will, sometimes it will not, and you may have no visibility.
Anti-pattern 3. Very long expiries as the only UX fix
This trades usability for weaker control.
Anti-pattern 4. Restarting from zero on every failure
Expensive, frustrating, and avoidable when ranges are available.
Anti-pattern 5. No stable app-level download identity
Makes refresh and support much harder.
Anti-pattern 6. Logging everything except the failure cause
Support teams need to know whether the request expired, the network failed, or the file changed.
Which Elysiate tools fit this topic naturally?
Elysiate’s CSV validation tools are the most natural companions here, because many signed downloads ultimately serve export artifacts that users then trust downstream.
The operational pairing is useful:
- deliver files reliably
- and make sure the delivered files are structurally sane when users open them
FAQ
What happens if a signed URL expires during a download?
For S3-style presigned downloads, a transfer that starts before expiry can continue. But if the connection drops and the client must make a new request after expiry, that new request fails.
Can a client resume after the original signed URL expires?
Not with the same expired URL. A resumed request usually needs a fresh signed URL or a brokered app endpoint that can authorize a new request.
Why are Range requests important here?
They let the client request only the missing byte range instead of restarting the entire file, which is critical for large or unstable downloads.
Should I just increase the URL lifetime?
Usually not as the only fix. Longer lifetimes reduce friction but weaken security. A shorter-lived URL with a good refresh and resume flow is often a better design.
When does CloudFront help?
CloudFront helps when you want edge delivery, strong support for partial requests, and a cleaner separation between your user-facing UX and origin file delivery.
What is the safest default mindset?
Treat signed URLs as temporary transport authorizations, not as the whole download product. Design for interruption, refresh, resume, and support visibility from the beginning.
Final takeaway
Signed URLs expiring mid-download are not just a backend quirk.
They are a UX design constraint.
The safest production baseline is:
- understand request-time expiry behavior
- assume resumed requests may need fresh authorization
- use Range requests for efficient recovery
- keep a stable app-level download identity
- instrument failures clearly
- and avoid relying on one raw signed link as the entire user experience for large downloads
That is how you turn expiring signed URLs from a support headache into a controlled download flow.
About the author
Elysiate publishes practical guides and privacy-first tools for data workflows, developer tooling, SEO, and product engineering.