How to serve a static file directory with uWS #1003

Answered by uNetworkingAB
JemiloII asked this question in Q&A
Discussion options

So for this next project I'm doing, I really want to use uWS for https and wss. Most of my projects have the two split, unfortunately. This project is much simpler, but I don't see how to serve static content with uWS. I don't want to have to make a wildcard and do file lookups. I just want a simple thing like express, where I just do .static(route). I have a couple of image directories, an audio directory, and an Angular build dir I want to make static. Does uWS have a simple way of doing this?


Replies: 14 comments 32 replies

Comment options

No, but it could possibly be done at some point, maybe.

0 replies
Comment options

Why not use Nginx for static files?

0 replies
Comment options

Why not Cloudflare CDN? They also have WebSocket support, among plenty of other things.

0 replies
Comment options

@JemiloII You can look here for ways to serve a file/directory, or use Cloudflare CDN as suggested by @nickchomey.

0 replies
Comment options

Can't use Cloudflare as a CDN; my website generates images and other content. It would be too slow to upload to a CDN and then serve - faster to just serve from the server.

I don't want to use Nginx to serve static files; I just want to use uWS. I don't want to mess with proxies on a server instance just to serve content on ports 80/443. In my experience with Cloudflare, it doesn't play nice with sockets on other ports if you're not a Pro member, and even as a Pro member, the sockets for my use cases perform better on those ports without the proxy.

0 replies
Comment options

I think you're misunderstanding a CDN. If you proxy your DNS through Cloudflare, it will automatically cache static assets when they are requested and serve them to future visitors. Not only that, but they get served from "the edge" - their 300+ datacenters, which are much closer to your visitors than your server will be.

HTML needs to be specifically selected for caching, though - they have docs on it.

They also offer Cloudflare Pages for static sites. It might suit your needs.

0 replies
Comment options

App.static(path) could definitely be a feature to look at at some point, but not right now. uWS doesn't exist where it doesn't provide exceptional value, so if it had app.static, it would have to be one of the best-performing such features. Right now, that's not the case, since it would require kTLS and sendfile, which would lock it to Linux only - and that's not the plan. So it's more than just an opinion: I don't want to add features that aren't obviously motivated, especially not since most companies use proxies, and proxies have static file serving built in.

1 reply
@Mupli
Comment options

I thought this project was used only by smaller teams or geeks. Companies usually have teams and use Node.js because they don't care about the tech; they just want something "stable" without a SPOF (sorry :0).

Answer selected by uNetworkingAB
Comment options

App.static(path) - anyone who needs such an interface can and should use a proxy.

The whole point is why we can't use a proxy, and why the file-sending interface should be different.

We check the user's access rights to the file: before sending it, we query the database and determine whether the user may receive this file or not, plus we set private caching headers and return a 304 status if the client already has the file in its local cache, plus the real path to the file is often much more complex and private than the request string.

Thus, it is much more useful to have a method response.sendFile(path, optional range) that we call ourselves inside our route handlers; such a method is much smarter and more flexible.

The efficiency of this method matters only on Linux (production); on Windows and macOS it does not have to be very fast or efficient, because those operating systems are for development, not production.

Thanks.
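(For concreteness, here is a rough sketch of the handler pattern described above in uWebSockets.js: read request details synchronously, do the access check, answer 304 when the client's cache is current, otherwise send the file with private caching headers. resolvePathAndCheckAccess is a hypothetical stand-in for the database/ACL lookup, and the ETag scheme is only an example - none of this is an existing uWS API.)

import { App } from 'uWebSockets.js';
import { stat, readFile } from 'node:fs/promises';

// Hypothetical stand-in for the DB/ACL lookup described above:
// resolves an id to a real path, or throws if access is denied.
const resolvePathAndCheckAccess = async (id) => {
  if (!/^[\w-]+$/.test(id)) throw new Error('denied');
  return `./private/${id}`;
};

App().get('/files/:id', (res, req) => {
  // req is only valid synchronously, so read everything needed up front
  const id = req.getParameter(0);
  const ifNoneMatch = req.getHeader('if-none-match');
  res.onAborted(() => { res.aborted = true; });

  resolvePathAndCheckAccess(id)
    .then(async (filePath) => {
      const { size, mtimeMs } = await stat(filePath);
      const etag = `"${size}-${Math.trunc(mtimeMs)}"`;   // example ETag scheme
      if (ifNoneMatch === etag) {
        // Client already has the file in its local cache
        if (!res.aborted) res.cork(() => res.writeStatus('304 Not Modified').end());
        return;
      }
      const data = await readFile(filePath);
      if (!res.aborted) {
        res.cork(() => {
          res.writeStatus('200 OK')
            .writeHeader('ETag', etag)
            .writeHeader('Cache-Control', 'private, max-age=0, must-revalidate')
            .end(data);
        });
      }
    })
    .catch(() => {
      // Simplified: real code would distinguish 403/404/500
      if (!res.aborted) res.cork(() => res.writeStatus('403 Forbidden').end());
    });
}).listen(3000, () => {});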

1 reply
@uNetworkingAB
Comment options

I agree - you can build App.static yourself using a parameterized route and sendFile.

Comment options

Just started looking at this library and made a minimal file handler that worked for my case. I pushed it to a repo here.

import fs from 'node:fs/promises';
import mime from 'mime-types';

const sendFile = (filePath, res) => {
  res.onAborted(() => {
    res.aborted = true;
  });
  console.log(`send file -> ${filePath}`);
  fs.readFile(filePath).then(data => {
    if (!res.aborted) {
      res.cork(() => {
        res.writeStatus('200');
        res.writeHeader('Content-Type', mime.lookup(filePath));
        res.end(data);
      });
    }
  }).catch(err => {
    console.log(err);
    if (!res.aborted) {
      res.cork(() => {
        res.writeStatus('404');
        res.end();
      });
    }
  });
};

export { sendFile };
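One possible way to mount that helper on a wildcard route (assuming uWebSockets.js and a ./public directory; note that mime.lookup() returns false for unknown extensions, and a real handler should normalize the path to block '..' traversal):

import { App } from 'uWebSockets.js';
import { sendFile } from './sendFile.js';   // the helper above

App().get('/static/*', (res, req) => {
  // Naive URL -> file mapping; reject anything containing '..'
  const rel = req.getUrl().slice('/static/'.length);
  if (rel.includes('..')) {
    res.writeStatus('400 Bad Request').end();
    return;
  }
  sendFile('./public/' + rel, res);
}).listen(3000, (token) => {
  if (token) console.log('Serving ./public on port 3000');
});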
6 replies
@erf
Comment options

Yeah, I should probably use a stream like in the example here, but it seems to work for my use case (just smaller JS files) for testing locally (do you know the size limit?). I agree you should probably use Nginx or similar in production.

@uasan
Comment options

If you have small files and they are immutable, then when the server starts, scan the folder with the files, save them all in a local JS cache, and always respond from that cache - you don't even need promises.

@erf
Comment options

Good idea - the "local JS cache" is just a Map in memory you define, I guess.

@erf
Comment options

I added something like that here :)

@uasan
Comment options

For fileCache, use a Map. And there's no need to do data.toString() - save the binary buffer, and compress it with gzip if it is a text file.
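(A minimal sketch of that suggestion, assuming a ./public folder of small, immutable files: scan once at startup, keep binary Buffers in a Map, gzip only the text types, and answer synchronously from memory. The text-extension list is an assumption, and the handler ignores Accept-Encoding for brevity.)

import { App } from 'uWebSockets.js';
import { readdirSync, readFileSync } from 'node:fs';
import { gzipSync } from 'node:zlib';
import { join, extname } from 'node:path';
import mime from 'mime-types';

const textExt = new Set(['.html', '.css', '.js', '.mjs', '.svg', '.json', '.txt']);
const fileCache = new Map();

// Scan once at startup; files are assumed small and immutable
for (const entry of readdirSync('./public', { withFileTypes: true })) {
  if (!entry.isFile()) continue;
  const raw = readFileSync(join('./public', entry.name));
  const gzip = textExt.has(extname(entry.name));
  fileCache.set('/' + entry.name, {
    type: mime.lookup(entry.name) || 'application/octet-stream',
    gzip,
    body: gzip ? gzipSync(raw) : raw,   // keep binary as-is, gzip text
  });
}

App().get('/*', (res, req) => {
  const hit = fileCache.get(req.getUrl());
  if (!hit) return res.writeStatus('404 Not Found').end();
  res.writeStatus('200 OK').writeHeader('Content-Type', hit.type);
  if (hit.gzip) res.writeHeader('Content-Encoding', 'gzip');
  res.end(hit.body);   // fully synchronous: no promises, no onAborted needed
}).listen(3000, () => {});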

Comment options

It would be nice if zero-copy support were added. (Interesting read: https://lwn.net/Articles/726917/)

Note: Everyone suggests setting up a proxy - Nginx or Cloudflare is the way. I don't fully agree with this. An additional proxy adds code/complexity to the infrastructure and more things to maintain. For smaller monolith/one-person projects, it becomes a maintenance problem.
One of my projects is dead because of the proxy/Nginx code. I would like to deploy it again, but setting up raw Nginx from scratch is not as easy as "npm start". (I feel pain when I think about all the Nginx configs.)

Note 2: All frameworks that are based on uWS do some manual work there (better or worse). It would be nice to have native support.

Note 3: I'm not sure how uWS works internally, but I treat it as a "hyperfast JS server"; adding super-fast methods for files would just move uWS toward that goal.

P.S. Sorry for the English.

0 replies
Comment options

If you don't use a CDN and proxy servers right away during development (and don't use a DB cluster during development), then a mind-blowing fuck-up will happen in production once the traffic exceeds the server's capabilities. It will be hell. And this hell will happen in production, and each of your programmers will have a burning ass.

Use a proxy/content server from the very beginning of development. A separate proxy/content server. It should route traffic to your main servers (or a single server in the beginning) and can also be a static file server. Yes, this adds complexity, but it must be done.

Of course, it would be nice if uWS also had the functions of an ultra-fast proxy/content server, reducing the stack of libraries used, but... is this really necessary for this library? It's hard to say. Maybe. Maybe not.

3 replies
@Mupli
Comment options

Still, you can always have a Cloudflare proxy and set it up in 10 minutes from scratch, and you wouldn't need to change your approach on the server.
My aim is to have this option for those who need it. I'm lazy, and I'm trying to have a nice monolith so I can maintain everything on one server - GDPR, cookie files, etc. Why should I set up S3 or some other HTTP serving place for files or for some static HTML?

@uNetworkingAB
Comment options

uWS will probably get a cache at some point, maybe as a paid feature (along with other features). With a cache, all file serving routes could be shitty but still fast in the hot path.

@Mupli
Comment options

too soon to be paid. You need to get big players first.

Comment options

I'm kinda surprised this is still going. All I was really hoping to achieve was to put uWS on a small box and have it serve as my backend, sockets, and static file path with minimal setup/config, so I can use the same SSL ports without having to put uWS on a different port or use some kind of reverse proxy.

3 replies
@uNetworkingAB
Comment options

I'm kinda surprised this is still going.

I don't want to have to make a wildcard and do file lookups. I just want a simple thing like express, where I just do .static(route)

Ok. Then use Express. It really is that simple.

uWS does not have any plan to add app.static unless a strong motivation comes along. And that motivation is not going to be you, writing that you are surprised nothing happened.

@JemiloII
Comment options

I already know you guys aren't going to do that. I'm just amused that this is still going on after you let us know that you aren't doing it. More like the thread should just be closed and a note added to an FAQ about it.

@uNetworkingAB
Comment options

It could be done at some point, it's a valid idea.

But it isn't exactly simple to do, if you want it well done. uWS does not add half-assed solutions. You need sendfile and kTLS to make it anywhere near the best. kTLS is a major feature.

I hear you say: "but I don't care about the best solution, I just want it to be simple." OK, well, again: then use Express. Express definitely is simpler than uWS.

Comment options

What restrictions generally exist for sending static content via res.end(), and what are they related to - blocking? Is res.end() limited to kilobytes, tens of kilobytes, or strings up to a thousand characters long?
I don't really understand why you can't use res.end() for most content (CSS, JS, HTML) and use res.onWritable() for everything else.

18 replies
@Msfrhdsa
Comment options

Oops, that was a stupid question, I was able to cause backpressure, the problem was in my proxy server.

Add this to your code:

Memory tracking in GC environments is quite difficult. Maybe it's not uWS but a Node.js error, or maybe something is happening in V8, the GC, or elsewhere. You probably need special tooling to analyze this, but what's the point of all this - fuck it.

@uNetworkingAB
Comment options

@Msfrhdsa That (your variant) function is not correct and will send corrupt data to end users. You haven't tested it enough to stress the case where tryEnd consumed part of the chunk.

@uNetworkingAB
Comment options

Have you considered just asking ChatGPT if you don't understand? I asked it, and it clearly understands it:

The pipeStreamOverResponse function is a helper function designed to pipe data from a readable stream (readStream) over an HTTP response (res). This function is typically used in web servers to send large files or data streams to the client efficiently. Here's a breakdown of the function:

Parameters

  • res: The HTTP response object, which is used to send data back to the client.
  • readStream: A Node.js ReadableStream from which data is read.
  • totalSize: The total size of the data being streamed. This is used to indicate the progress of the streaming process.

Process

  1. Handling Data Events:

    • The function listens for data events on the readStream. When data chunks are available, they are processed one by one.
    • Each chunk of data is converted to an ArrayBuffer using the toArrayBuffer(chunk) function (not defined in the provided code, but presumably converts the chunk to a format suitable for sending over the response).
    • The current write offset in the response is stored using res.getWriteOffset().
  2. Sending Data:

    • The tryEnd method on the res object attempts to send the data chunk to the client. It returns two values:
      • ok: Indicates whether the chunk was successfully sent.
      • done: Indicates whether this chunk was the last one to send.
    • If done is true, the function calls onAbortedOrFinishedResponse to handle the end of the response (this function is not defined in the provided code but typically handles cleanup and completion).
  3. Handling Flow Control:

    • If a chunk cannot be sent immediately (ok is false), the readStream is paused to prevent reading more data.
    • The chunk and its offset are saved in the res object for later use.
    • The res.onWritable event is used to listen for when the response can accept more data. When triggered, the function attempts to send the remaining data starting from the last unsent position.
    • The onWritable callback must return true or false to indicate whether the data was successfully sent or not.
  4. Error Handling:

    • If an error occurs in the readStream, a simple error message is logged. The function suggests that error handling should be implemented here, such as closing the response or other cleanup actions.
  5. Handling Aborted Responses:

    • The function sets up an onAborted handler on the res object. This handler will be called if the response is aborted (e.g., if the client disconnects), ensuring proper cleanup via the onAbortedOrFinishedResponse function.

Key Considerations

  • Flow Control: This function carefully manages the flow of data to avoid overwhelming the client or the network. It uses mechanisms like pausing the read stream and handling backpressure through onWritable.
  • Error and Abort Handling: Although error handling is not fully implemented, there are placeholders indicating where such logic should be added.
  • Performance Considerations: The function is designed to handle streaming efficiently, even with large data sizes, by sending data in chunks and managing the connection state.

Overall, the function is a robust solution for streaming data over HTTP in a Node.js environment, ensuring efficient data transfer and handling potential issues like backpressure and client disconnects.

@uNetworkingAB
Comment options

why there is an offset in onWritable in VideoStreamer.js. It is always equal to res.getWriteOffset() and res.abOffset

No, it isn't. They certainly can be different, and will be different in production stress.

@uNetworkingAB
Comment options

Here is ChatGPT telling you why (you should really ask ChatGPT instead of having confident outbursts here):

Why Offsets Are Used

  1. Handling Partial Writes:

    • When a chunk of data is sent using res.tryEnd(ab, totalSize), the function might not be able to send the entire chunk in one go due to network conditions or limitations on the client's side. This can result in partial writes.
    • The offset in the onWritable callback represents the position in the current chunk where the next write should begin. This is essential to ensure that any partially sent data is not resent, which would lead to duplication and errors.
  2. Backpressure Management:

    • Backpressure occurs when the rate of incoming data (from the readable stream) exceeds the rate at which the data can be sent out (through the response). To manage this, the readStream is paused when the tryEnd method returns ok as false.
    • The res.getWriteOffset() method returns the current write offset, which indicates how much of the response has been sent. This helps in determining how much data has been successfully written so far.
    • The res.abOffset stores the offset at which the current chunk (res.ab) was first attempted to be written. Together with res.getWriteOffset(), it helps in calculating the correct slice of the data to resend.
  3. Correct Data Handling:

    • In onWritable, the function checks the difference between res.getWriteOffset() and res.abOffset. This difference tells us how much of the previously stored buffer has been sent.
    • The data chunk is then sliced starting from this offset and passed again to tryEnd. This ensures that only the unsent portion of the data is retried, maintaining data integrity.

Why It Might Seem Redundant

It might seem that using both res.getWriteOffset() and res.abOffset is redundant since they are typically expected to represent the same point in the stream if everything is working perfectly. However, these offsets are crucial for the following reasons:

  • Edge Cases: There could be situations where the response might not be able to send all the data in one go, especially under high load or network fluctuations. The offsets ensure that any data that couldn't be sent due to such conditions is correctly handled.
  • Stream Pauses and Resumes: When the readStream is paused, the onWritable event provides a mechanism to resume the stream once the response can accept more data. Offsets help in ensuring that the data flow resumes correctly from the last unsent byte.
  • Data Integrity: By keeping track of both res.getWriteOffset() and res.abOffset, the function ensures that it neither skips any data nor sends any data twice.

In summary, the use of offsets in the onWritable callback is a safeguard to handle partial writes, backpressure, and network reliability issues, ensuring smooth and correct data streaming to the client.
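(Putting those two explanations together, here is a condensed sketch of the backpressure pattern from the official uWebSockets.js VideoStreamer.js example that this thread refers to, with the offset bookkeeping commented. Treat it as a sketch, not a drop-in implementation.)

// Convert a Node.js Buffer chunk to a plain ArrayBuffer slice
function toArrayBuffer(buffer) {
  return buffer.buffer.slice(buffer.byteOffset, buffer.byteOffset + buffer.byteLength);
}

function pipeStreamOverResponse(res, readStream, totalSize) {
  readStream.on('data', (chunk) => {
    const ab = toArrayBuffer(chunk);
    const lastOffset = res.getWriteOffset();     // bytes written so far
    const [ok, done] = res.tryEnd(ab, totalSize);
    if (done) {
      readStream.destroy();                      // whole response sent
    } else if (!ok) {
      // Backpressure: pause and remember the chunk plus where it started
      readStream.pause();
      res.ab = ab;
      res.abOffset = lastOffset;
      res.onWritable((offset) => {
        // offset - res.abOffset = how much of the saved chunk already went out,
        // so retry only the unsent tail (this is why both offsets are needed)
        const [ok, done] = res.tryEnd(res.ab.slice(offset - res.abOffset), totalSize);
        if (done) {
          readStream.destroy();
        } else if (ok) {
          readStream.resume();
        }
        return ok;
      });
    }
  }).on('error', () => {
    // Real code should close/abort the response here
  });

  res.onAborted(() => {
    readStream.destroy();                        // client went away
  });
}

Usage would be something along the lines of pipeStreamOverResponse(res, fs.createReadStream(path), fs.statSync(path).size) from inside a route handler, as in the original example.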

Comment options

Guys @erf @uNetworkingAB, I am trying to serve HTML and it loads the page, but I am unable to load images that are stored in the cloud.

0 replies
Category
Q&A
Labels
question - Further information is requested
11 participants
@JemiloII @erf @dalisoft @cerjs @Msfrhdsa @uasan @sailingwithsandeep @e3dio @nickchomey @Mupli @uNetworkingAB
Converted from issue

This discussion was converted from issue #997 on December 24, 2023 04:34.

