• 0 Posts
  • 11 Comments
Joined 1Y ago
Cake day: Jun 21, 2023


Inline consts also let you perform static assertions, like asserting a type parameter is not a zero-sized type, or a const generic is non-zero. This is actually pretty huge since some checks can be moved from runtime to compile time (not a lot of checks, but some that were difficult or impossible to do at compile time before).
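For example, something like this (function names made up here, and assuming the now-stable const { ... } blocks) turns both checks into compile-time assertions:

fn assert_not_zst<T>(value: T) -> T {
    // fails at compile time (when this is monomorphized) if T is a zero-sized type
    const { assert!(std::mem::size_of::<T>() != 0, "T must not be zero-sized") };
    value
}

fn first_n<const N: usize>(data: &[u8]) -> &[u8] {
    // fails at compile time instead of needing a runtime check for N == 0
    const { assert!(N > 0, "N must be non-zero") };
    &data[..N]
}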


Maybe it’s just me, but the majority of programmers I’ve worked with don’t even know how to quit vim, let alone use it for programming. I wonder whether the demographic that completed the survey accurately represents everyone who uses Rust, or only those most passionate about the language. It’s also possible that ~30% of Rust programmers really do use vim (and friends) and represent a different group of programmers than the ones I’ve worked with (who use more traditional programming languages).

Nothing against vim of course. vim is a great editor.


For shared, immutable lists, you can also use Arc/Rc in place of Box! I guess what’s surprising is that [T] (and str/Path/OsStr/etc) are traditionally associated with shared borrows (like &[T]), but they’re really just unsized types that can be used anywhere there’s a ?Sized bound.
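For example, a quick sketch of what that looks like with Arc:

use std::sync::Arc;

// Arc can point straight at an unsized slice, just like Box<[T]> can:
let shared: Arc<[i32]> = Arc::from(vec![1, 2, 3]);
let another_handle = Arc::clone(&shared); // cheap refcount bump, no copy of the data

// Same idea for str:
let name: Arc<str> = Arc::from("some shared string");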

Also, overuse of lists (as opposed to other data structures) seems to be a common problem in all programming languages among less experienced devs since they’re simple enough to use in most places. Not saying that’s what happened to the author though - these types are way less obvious than something like a hash set or queue.


For very simple backends built by an experienced team, you’re unlikely to see any significant number of bugs, and if performance isn’t a concern, Rust being faster isn’t relevant. For anything more complex than a simple backend, I’d agree that Rust becomes a lot more appealing, but if you just need to throw together something that handles user profiles or the like in a very simple manner, it really doesn’t make a difference which language you use as long as you write a few tests to make sure everything works.


This highly depends on what it is you’re trying to build. If it’s a simple CRUD backend + database, then there’s really no reason to use Rust except if you just want to. If it’s doing heavy computation, then you’d want to benchmark both and see if any potential gains by writing it in Rust are worth the effort of using Rust over Node.js.

Practically speaking, it’s really uncommon to need to write a backend in Rust over something like JS or Python. Usually that’s only needed for high throughput services (like Cloudflare’s proxy service which handles trillions of daily requests), or ones performing computationally expensive work they can’t offload to another service (after benchmarking first of course).


One thing to keep in mind is that tracing works on spans/events, so rather than the subscriber receiving a string log message, it’s receiving some metadata and a collection of field/value pairs, where values can be a lot of different types. You may need to determine ahead of time which fields you want deduped (or which you don’t want deduped).
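For example, on the subscriber/layer side you typically pull an event’s fields out with a Visit impl - roughly something like this (a sketch, assuming the tracing/tracing-subscriber APIs):

use std::fmt::Debug;
use tracing::field::{Field, Visit};

// Collects an event's field/value pairs so you can decide which ones to dedupe.
#[derive(Default)]
struct FieldCollector {
    fields: Vec<(String, String)>,
}

impl Visit for FieldCollector {
    fn record_debug(&mut self, field: &Field, value: &dyn Debug) {
        self.fields.push((field.name().to_string(), format!("{value:?}")));
    }
}

// Inside a Layer's on_event you'd then do something like:
//     let mut collector = FieldCollector::default();
//     event.record(&mut collector);
//     // collector.fields now holds this event's key/value pairs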


In this case, I don’t think Sink will let you selectively remove sources (although you can clear the sink if you want), but whenever you want to play a click you could clear the sink and append the clicking source to it. Alternatively, you could create a source that chains the clicking sound with something like Zero and have it repeat indefinitely, but have the Zero source play until you receive a new signal to play the click audio (and stop the source once you’re done clicking).
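The clear-then-append version could look roughly like this (assuming a rodio version where Sink has clear - note that clear also pauses the sink):

// whenever a click should play: drop whatever is queued and queue the click
sink.clear();                     // also pauses the sink
sink.append(click_sound.clone()); // click_sound being a Buffered source so cloning is cheap
sink.play();                      // resume playback after clear()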

I think how you should approach this depends on your architecture, so I’ll give a couple approaches I would consider if I were trying to do this myself:

  1. For a blocking approach: I’d play the click sound once (using .append on the sink, for example), then work out how long is left until the next click is due and pass that duration to thread::sleep. This would look something like this (pseudo-ish code):

    let click_interval = Duration::from_millis(500); // however long you want between clicks
    let mut next_click = Instant::now();
    for _click_idx in 0..num_clicks {
        sink.append(click_sound.clone()); // you can buffer the click_sound source and clone the buffer using .buffered() if needed
        next_click += click_interval;
        let remaining = next_click.saturating_duration_since(Instant::now());
        if remaining > Duration::ZERO {
            std::thread::sleep(remaining);
        }
    }
    
  2. For a non-blocking approach where you have a separate thread managing the audio, I’d use a channel or similar as a signal for when to play the click. Your thread could then wait until that signal is received and append the click sound to the sink. You’d basically have a thread dedicated to managing the audio in this case (there’s a rough sketch of this right after the list). If you want a more complete example of this, here’s a project where we used rodio with tauri (like you) to queue up audio sources to be played on demand whenever the user clicks certain buttons in the UI. The general architecture is the same - just a for loop listening for events from a channel, and then using those events to add sources to our output stream (though I believe you can just use a Sink to keep things simple).
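A minimal sketch of that channel-based setup (assuming rodio’s OutputStream::try_default/Sink::try_new API and a made-up click.wav path - adapt the names to your project):

use std::{fs::File, io::BufReader, sync::mpsc, thread};
use rodio::{Decoder, OutputStream, Sink, Source};

enum AudioCommand {
    PlayClick,
    Shutdown,
}

let (tx, rx) = mpsc::channel::<AudioCommand>();

// Dedicated audio thread that owns the output stream, the sink, and the buffered click sound.
thread::spawn(move || {
    let (_stream, handle) = OutputStream::try_default().unwrap();
    let sink = Sink::try_new(&handle).unwrap();
    // Decode once and buffer so every click is a cheap clone.
    let click = Decoder::new(BufReader::new(File::open("click.wav").unwrap()))
        .unwrap()
        .buffered();
    for cmd in rx {
        match cmd {
            AudioCommand::PlayClick => sink.append(click.clone()),
            AudioCommand::Shutdown => break,
        }
    }
});

// From the UI side (e.g. a tauri command handler), signal the audio thread whenever the user clicks:
tx.send(AudioCommand::PlayClick).unwrap();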


If you want the source to repeat indefinitely, you can try calling repeat_infinite on it. Combine that with pausable/stoppable, and use periodic_access to occasionally check whether the audio source should be paused/stopped, for example via an Arc<AtomicBool>.

It could look something like this:

use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;
use rodio::Source;

let src = ...; // whatever rodio Source you're starting from
let stop = Arc::new(AtomicBool::default());
let stop2 = stop.clone();
let src = src
    .repeat_infinite()
    .stoppable()
    .periodic_access(Duration::from_millis(50), move |src| {
        // src here is the Stoppable wrapper, so it can be stopped from outside the audio thread
        if stop2.load(Ordering::Relaxed) {
            src.stop();
        }
    });

// later on...
stop.store(true, Ordering::Relaxed);

periodic_access is also how Sink controls the source when you want to pause/stop/etc it. You could probably use Sink directly if you want more control over the source.


A lot of nice QoL changes (checking for missing feature flags, for example), but the thing that stood out to me the most was impl Sync for mpsc::Sender. This has always been a pain point of the standard library’s channels in my opinion, but now that they’re using crossbeam-channel internally, there’s no need to add it as a dependency anymore.
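For instance, with Sender now being Sync, a &Sender can be shared across scoped threads instead of cloning the sender for each one (quick sketch):

use std::sync::mpsc;
use std::thread;

let (tx, rx) = mpsc::channel::<u32>();

// Sender: Sync means &Sender: Send, so scoped threads can share a borrowed sender.
thread::scope(|s| {
    for i in 0..4 {
        let tx = &tx;
        s.spawn(move || tx.send(i).unwrap());
    }
});

drop(tx); // close the channel so the receive loop below terminates
for v in rx {
    println!("received {v}");
}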

I think some people will be upset by them dropping support for older Windows versions. I can see why they wouldn’t want to keep supporting them, though: it takes extra work to maintain compatibility with those old OS versions, and the vast majority of users (by percentage) are on 10/11 now. Still, a shame.


Looks like labels don’t work on async blocks, but using a nested inner block does work. Also, yeah, async blocks only exist to create Futures; they don’t execute until you start polling them. I don’t really see any reason why you couldn’t stick a label on the async block itself though - breaking would at worst just add one new state to the future (early returned).
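The nested-block workaround compiles today; something like this (the function and values are made up for illustration):

async fn compute(flag: bool) -> u32 {
    // A label directly on the `async` block itself doesn't compile,
    // but a labeled block *inside* it works fine:
    let value = 'inner: {
        if flag {
            break 'inner 0; // bails out of the block, not the whole future
        }
        async { 21 }.await * 2
    };
    value + 1
}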


Also worth mentioning: you can early-return from a block in Rust too, using the break keyword and labeled blocks:

let x = 'my_block: {
    if thing() { break 'my_block 1; }
    2
};

Edit: I haven’t tried this with async blocks, but I’m guessing it works there too?