Multiple specialization, iterator patterns in Rust
Learning Rust (yay!) and I'm trying to understand the idiomatic way to write some iterator-like patterns without giving up top performance. Note: it isn't literally Rust's Iterator trait; there is just one method which accepts a closure and applies it to some data I'm pulling off of disk / out of memory.
I was delighted to see that rustc (+ LLVM?) took an iterator I wrote over sparse matrix entries, plus a call to do sparse matrix-vector multiplication,

iterator.map_edges(|x, y| dst[y] += src[x]);

and inlined the body of the closure into the generated code. It went very fast :D
If I make two of these, or use it a second time (not a correctness problem), each instance gets slower (roughly 2x in this case), presumably because the method no longer gets specialized now that there are multiple call sites, and you end up paying a function call for each element.
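A minimal sketch of one way this can happen, assuming the closure is passed as a trait object (&mut dyn FnMut) rather than as a generic parameter; the EdgeIter type and the exact signature are hypothetical, and only map_edges, src, and dst come from the post:

```rust
/// Hypothetical iterator over the (row, col) entries of a sparse matrix.
struct EdgeIter {
    edges: Vec<(usize, usize)>, // kept uncompressed for simplicity
}

impl EdgeIter {
    /// The closure arrives as a trait object, so only one copy of
    /// map_edges is ever compiled. With a single call site the optimizer
    /// may inline map_edges and devirtualize the closure call; with
    /// several distinct closures it may leave an indirect call per element.
    fn map_edges(&self, f: &mut dyn FnMut(usize, usize)) {
        for &(x, y) in &self.edges {
            f(x, y);
        }
    }
}

/// Sparse matrix-vector multiplication, as in the snippet above.
fn spmv(iter: &EdgeIter, src: &[f64], dst: &mut [f64]) {
    iter.map_edges(&mut |x, y| dst[y] += src[x]);
}
```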
I'm trying to understand whether there are idiomatic patterns that give a pleasant experience (for me, at least) without sacrificing performance. My options (none of which obviously satisfies both constraints):
- Accept the performance hit (2x is not slow, but it's no prize either).
- Ask the user to supply a batch-oriented closure, acting on an iterator over a small batch of data. This exposes some of the iterator's internals (the data are well compressed, and either the user needs to know how to unpack them, or the iterator needs to unpack a batch into memory).
- Define a hypothetical EdgeMapClosure trait, ask the user to implement it for each closure, and have map_edges take an implementor of the trait (see the sketch after this list). I haven't tested it, but I think it presents distinct methods to LLVM, each of which should inline well. The downside is that the user has to write their closure by hand (packing up the relevant state, etc.).
- Awful hacks, like making distinct methods map_edges0, map_edges1, ..., or using generic parameters to separate the methods the programmer wants specialized, but which are otherwise ignored.
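A minimal sketch of that trait-based option, assuming a hypothetical EdgeMapClosure trait with a single apply method and reusing the hypothetical EdgeIter from above; the SpMv type is also made up for illustration:

```rust
/// Hypothetical trait: one named type per "closure", so map_edges gets
/// instantiated, and its loop inlined, separately for each implementor.
trait EdgeMapClosure {
    fn apply(&mut self, x: usize, y: usize);
}

struct EdgeIter {
    edges: Vec<(usize, usize)>,
}

impl EdgeIter {
    /// Generic over the implementor: each distinct C yields its own
    /// monomorphized copy of this loop, with C::apply inlined into it.
    fn map_edges<C: EdgeMapClosure>(&self, c: &mut C) {
        for &(x, y) in &self.edges {
            c.apply(x, y);
        }
    }
}

/// A hand-written "closure": the user packs up the borrowed state themselves.
struct SpMv<'a> {
    src: &'a [f64],
    dst: &'a mut [f64],
}

impl<'a> EdgeMapClosure for SpMv<'a> {
    fn apply(&mut self, x: usize, y: usize) {
        self.dst[y] += self.src[x];
    }
}
```

Because SpMv is a distinct type, map_edges::<SpMv> is compiled separately from any other implementor's copy; the price is the boilerplate of packing src and dst into a struct by hand.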
In the non-solutions bin is "just use for pair in iterator.iter() { /* */ }": this is a data-parallel platform, and rather than doing the work on the main thread, worker threads want to grab these closures and do the work themselves. It may be that the method should be hoisted up a level, and the lambda/closure boxed up and shipped to the workers instead?

In an ideal world, there would be a pattern where each occurrence of map_edges in the source file results in its own specialized method in the binary, without some horrifying level of whole-program optimization. I'm coming off an unpleasant relationship with managed languages and JITs, where that was the only way to get generics to perform. But rustc and LLVM seem pretty magical, so I thought there might be a good way. How do Rust's iterators manage to get their closure bodies inlined? Or don't they (they should!)?
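For what it's worth, the standard library answers that last question by making the adapter generic over the closure's type. The snippet below is a simplified re-implementation of the shape of Iterator::map (names and fields are approximations, not the actual std source):

```rust
// std declares roughly: fn map<B, F>(self, f: F) -> Map<Self, F>
//     where Self: Sized, F: FnMut(Self::Item) -> B;
// The closure's concrete type F is baked into the adapter's type, so every
// syntactically distinct closure gets its own Map<I, F> and its own
// monomorphized `next`, into which the closure body can be inlined.
struct Map<I, F> {
    iter: I,
    f: F,
}

impl<I, F, B> Iterator for Map<I, F>
where
    I: Iterator,
    F: FnMut(I::Item) -> B,
{
    type Item = B;
    fn next(&mut self) -> Option<B> {
        self.iter.next().map(&mut self.f)
    }
}
```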
It seems the solution to the problem is to make the method generic over the type of the closure: each closure produces its own type, so each call site gets its own specialized copy of map_edges.
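A minimal sketch of that conclusion, again using the hypothetical EdgeIter name rather than anything from the post: making map_edges generic over F means the two call sites below compile to two separate copies of the loop, each with its closure inlined.

```rust
struct EdgeIter {
    edges: Vec<(usize, usize)>,
}

impl EdgeIter {
    /// Generic over the closure type: rustc monomorphizes one copy of
    /// this method per distinct F, and the closure body is inlined into
    /// that copy, so adding a second call site does not slow the first.
    fn map_edges<F: FnMut(usize, usize)>(&self, mut f: F) {
        for &(x, y) in &self.edges {
            f(x, y);
        }
    }
}

fn main() {
    let iter = EdgeIter { edges: vec![(0, 1), (1, 2), (2, 0)] };

    let src = vec![1.0, 2.0, 3.0];
    let mut dst = vec![0.0; 3];
    // Call site 1: sparse matrix-vector multiplication.
    iter.map_edges(|x, y| dst[y] += src[x]);

    let mut degree = vec![0usize; 3];
    // Call site 2: a different closure, hence a different F, hence a
    // separately specialized copy of map_edges.
    iter.map_edges(|x, _y| degree[x] += 1);

    println!("{:?} {:?}", dst, degree);
}
```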