
I have N input streams (queues) and N corresponding connections. A scheduler thread scans the incoming requests, prioritizes them based on some criteria, and sends them out on the respective connections (a request from input queue x goes out on connection x). A similar thing happens on the receive side.

In a traditional setting, one would create the connections with O_NONBLOCK. If a write to a connection would block, the request is left in its input queue and the queue is revisited later, so the scheduler thread is never stuck on one slow connection.
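Roughly, that traditional pattern might look like this (a minimal sketch using std::net; the `try_send` helper and the partial-write handling are just illustrative):

```rust
use std::io::{ErrorKind, Write};
use std::net::TcpStream;

/// Attempt one non-blocking write. Returns Ok(false) if the write would
/// block, so the scheduler can leave the request in its input queue and
/// revisit it later. The connection is assumed to have been switched to
/// non-blocking mode once, via `conn.set_nonblocking(true)`, at creation.
fn try_send(conn: &mut TcpStream, request: &[u8]) -> std::io::Result<bool> {
    match conn.write(request) {
        Ok(_n) => Ok(true), // a real scheduler would also track partial writes
        Err(e) if e.kind() == ErrorKind::WouldBlock => Ok(false),
        Err(e) => Err(e),
    }
}
```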

Is such a thing possible with tokio::net::TcpStream, etc.? It looks like tokio::io::{TryRead, TryWrite} existed in the past but were removed.

One option might be to create an out queue per connection, with a dedicated task per out queue that just dequeues and does write_all().await on the connection. This adds one more hop and more complexity, which makes me wonder whether Tokio is the right choice for this application.
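That option could be sketched like this, assuming Tokio 1.x-style APIs (tokio::sync::mpsc and AsyncWriteExt::write_all); the `writer_task` name and the channel capacity are just illustrative:

```rust
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use tokio::sync::mpsc;

/// Dedicated writer task for one connection: it drains the per-connection
/// out queue and performs the awaits, so a slow connection only stalls its
/// own task, never the scheduler thread.
async fn writer_task(mut conn: TcpStream, mut rx: mpsc::Receiver<Vec<u8>>) {
    while let Some(request) = rx.recv().await {
        if let Err(e) = conn.write_all(&request).await {
            eprintln!("connection write failed: {}", e);
            break;
        }
    }
}

// Wiring sketch: the scheduler keeps the Sender halves and can use
// Sender::try_send to avoid awaiting when a per-connection queue is full.
//
//     let (tx, rx) = mpsc::channel::<Vec<u8>>(64);
//     tokio::spawn(writer_task(conn, rx));
```

The extra hop is just the channel send; in exchange, the bounded channel gives per-connection back-pressure, and the scheduler can use Sender::try_send to skip a full queue without awaiting.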

  • The point of tokio is that everything is asynchronous - rather than using `try_read`, you use `read().await`. You can just loop through the TcpListener and for each new connection, spawn a task to handle that connection. tokio tasks are lightweight (a state machine on the heap), and respond to appropriate epoll events, so this is very efficient. Tokio architecture is very different from writing classical C network applications. [A sketch of this accept-and-spawn pattern follows these comments.] – Richard Matheson Apr 14 '20 at 11:47
  • We've [already told you](https://stackoverflow.com/questions/61088639/do-i-need-to-move-away-from-tokio-as-i-cannot-split-streams-in-tls-connections#comment108074442_61088639) that Tokio uses non-blocking sockets by default and that non-blocking is basically the entire point of the async / await syntax and futures in general. – Shepmaster Apr 14 '20 at 13:34
  • *It looks like there was `tokio::io::{TryRead, TryWrite}` in the past that got removed* — please link to where you found this. It's entirely possible that we could follow the pull requests that removed it to determine what the appropriate replacement would be. – Shepmaster Apr 14 '20 at 13:37
  • Thanks all. Re: "Tokio uses non-blocking sockets by default ..." - I get the executor/reactor pattern. This case is slightly different: I don't want the task to yield and go into a queue waiting for the waker, but instead to keep going if the operation won't complete immediately. I know this is a bit of a hybrid case, where we don't want to `await`. Also, the underlying mio objects are not accessible via the tokio::net::* structs, so checking this does not look possible. – rusty Apr 14 '20 at 17:39
  • Re: TryRead/TryWrite - sorry, I was mistaken; this was apparently in the underlying mio, not in tokio::net::*: https://github.com/tokio-rs/mio/issues/512 Also, I see this: http://rust-doc.s3-website-us-east-1.amazonaws.com/tokio/master/tokio/io/index.html - is this some alternate Rust implementation? – rusty Apr 14 '20 at 17:42
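
For illustration, a minimal sketch of the accept-and-spawn pattern described in the first comment, assuming Tokio 1.x (the bind address and the echo handling are placeholders):

```rust
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        // Each accepted connection gets its own lightweight task; reads and
        // writes yield to the runtime instead of blocking an OS thread.
        let (mut socket, _addr) = listener.accept().await?;
        tokio::spawn(async move {
            let mut buf = [0u8; 4096];
            loop {
                match socket.read(&mut buf).await {
                    Ok(0) => break, // peer closed the connection
                    Ok(n) => {
                        // Echo back as a stand-in for real request handling.
                        if socket.write_all(&buf[..n]).await.is_err() {
                            break;
                        }
                    }
                    Err(_) => break,
                }
            }
        });
    }
}
```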

0 Answers