Comments on Erik Engbrecht's Blog: "Improving Schedulers for High-level, Fine-grained Concurrency Frameworks"

Gene Tani (2010-08-15):

Clojure and fork-join: the work is in the "par" branch, but I haven't found a detailed description of how it works.

http://clojure-log.n01se.net/date/2010-06-19.html#11:31

http://github.com/richhickey/clojure/tree/par

Unknown (2010-08-12):

Good discussion; glad to see that others recognize this potentially dangerous performance issue for Actors.

Your improved FJ sounds like it's doing what Haller/Odersky originally described their scheduler to be like in the paper describing actors.

Here they say:

"The basic idea is that actors provide the scheduler with life-beats during their execution. That is, the send (!) and receive methods call a tick method of the scheduler. The scheduler then looks up the worker thread which is currently executing the corresponding actor, and updates its time stamp. When a new receiver task item is submitted to the scheduler, it first checks if all worker threads are blocked. Worker threads with "recent" time stamps are assumed not to be blocked.
Only if all worker threads are assumed to be blocked (because of old time stamps) is a new worker thread created."

The problem I see in any 'growing' implementation is when the code is not blocked, but rather busy with a CPU-intensive task. You end up creating MORE threads, which would slow down performance due to the context switches.

PetrolHead (2010-08-11, 16:50):

"So I don't think this is a special problem."

No indeed, and that's really the point: you're making things a little better, maybe, but there's still a bunch of things about the overall approach of blocking I/O and thread pools that aren't easily addressed.

So, for example, whilst you've done a better job of smartly allocating threads, and perhaps scaling I/O code a little better in some cases, the trade-off is that you can falsely exhaust thread resources when the real problem is an I/O bottleneck, leading to errors that misdirect developers.

"A secondary problem with non-blocking IO, at least on the JVM, is that it is slower."

Non-blocking I/O might be slower, but I/O is slow compared to the CPU, and thus burning up lots of threads and blocking them might work better for small loads, but it doesn't work so well for large concurrent network loads. Concurrent I/O on threads doesn't work so well for transactional actions against disk either, where batching of operations to minimise the impact of disk syncs is required.

"but primarily the problem with making all I/O non-blocking is that it requires a significant paradigm shift."

Equally, not shifting paradigms leaves you with lots of dark corners and thread-pooling strategies that can never quite cover them all off unless the developer lends a hand.
Something which jherber hits on:

"Stepping back, right now we are asking schedulers to read minds."

jherber (2010-08-11, 10:01):

Erik, that's a great optimization on current scheduling.

Stepping back, right now we are asking schedulers to read minds. Longer term, we could add typing at the task level. This way, blocking I/O, short, and long-running tasks are type-checked to run on appropriate schedulers.

Tasks should also be composable, so that the composing elements of an executing task are scheduled appropriately and optimally. At that point, we may as well throw in dependency or independence between composable elements, for further optimization.

The JVM would have to help with the last leg of optimization: if we could understand the amount of data moved between threads (cores) for a type of task, the cost of a context switch for a type of task, and the cost graph between context changes and data passing on the underlying computer architecture, we might have a chance at making Scala's next generation of task-scheduling libraries machine-level optimal.
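jherber's "typing at the task level" idea can be sketched on plain java.util.concurrent primitives. Everything below is hypothetical illustration (the `TaskKind` enum and `TypedScheduler` class are invented names, not part of any real framework): tasks declare their kind, and the scheduler routes blocking I/O to a cached pool and CPU-bound work to a fixed-size pool, so it no longer has to "read minds".

```java
import java.util.concurrent.*;

// Sketch of jherber's idea: tag tasks with a kind so the scheduler
// doesn't have to guess. TaskKind and TypedScheduler are hypothetical
// names, invented for this example.
enum TaskKind { CPU_SHORT, CPU_LONG, BLOCKING_IO }

class TypedScheduler {
    // CPU-bound work: bounded pool sized to the machine.
    private final ExecutorService cpuPool =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    // Blocking I/O: cached pool, since these threads mostly sleep in syscalls.
    private final ExecutorService ioPool = Executors.newCachedThreadPool();

    public <T> Future<T> submit(TaskKind kind, Callable<T> task) {
        return (kind == TaskKind.BLOCKING_IO ? ioPool : cpuPool).submit(task);
    }

    public void shutdown() {
        cpuPool.shutdown();
        ioPool.shutdown();
    }
}

public class Demo {
    public static void main(String[] args) throws Exception {
        TypedScheduler sched = new TypedScheduler();
        Future<Integer> cpu = sched.submit(TaskKind.CPU_SHORT, () -> 6 * 7);
        Future<String> io = sched.submit(TaskKind.BLOCKING_IO, () -> {
            Thread.sleep(10); // stands in for a blocking read
            return "done";
        });
        System.out.println(cpu.get() + " " + io.get()); // prints "42 done"
        sched.shutdown();
    }
}
```

This is only the coarse half of the proposal; the composability and dependency tracking jherber goes on to describe would sit on top of a routing layer like this.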
Erik Engbrecht (2010-08-10, 21:23):

@bsdemon
Yes, but primarily the problem with making all I/O non-blocking is that it requires a significant paradigm shift. The problem with having a runtime with special support is that you have to have a specialized runtime that won't run tons of existing code. A secondary problem with non-blocking IO, at least on the JVM, is that it is slower.

@PetrolHead
The problem you hit is that you run out of memory, just like what happens to all sorts of theoretically correct code when run on a machine with finite resources. The only substantial overhead associated with each thread is its stack (which is preallocated when the thread is created) and its task deque (which is likely small relative to the stack). So I don't think this is a special problem.

PetrolHead (2010-08-10, 07:42):

"and grows the pool if tasks appear to be being starved."

Of course you can't endlessly grow the pool, and so for particularly "ill" user code you'll still hit a problem.

bsdemon (2010-08-09):

Hmm... it seems the best result we can achieve is by making all I/O non-blocking; this is how it's done in the Erlang BEAM and GHC runtimes. All these requirements are satisfied in those runtimes.
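The Haller/Odersky "life-beat" scheme quoted in Unknown's comment can be sketched roughly as follows. This is a simplified, hypothetical reconstruction (`HeartbeatScheduler` and its method names are invented, not the real Scala actors scheduler): workers stamp a timestamp on every tick, and a new worker thread is created only when every existing worker's stamp is stale. The sketch also makes Unknown's objection concrete: a CPU-bound worker that never calls tick looks exactly like a blocked one, so the pool grows anyway.

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the heartbeat idea quoted from Haller/Odersky:
// workers stamp a "life-beat" on every send/receive; a new worker thread
// is created only if every existing worker's stamp is stale. Names are
// illustrative, not the real scheduler's API.
class HeartbeatScheduler {
    private static final long STALE_NANOS = TimeUnit.MILLISECONDS.toNanos(50);
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();
    private final Map<Thread, Long> lastBeat = new ConcurrentHashMap<>();
    private final AtomicInteger workers = new AtomicInteger();

    // Called by actor operations (send/receive) while running on a worker.
    void tick() { lastBeat.put(Thread.currentThread(), System.nanoTime()); }

    void submit(Runnable task) {
        tasks.add(task);
        if (allWorkersLookBlocked()) startWorker();
    }

    private boolean allWorkersLookBlocked() {
        long now = System.nanoTime();
        for (long beat : lastBeat.values())
            if (now - beat < STALE_NANOS) return false; // someone ticked recently
        // Unknown's objection lives here: a CPU-bound worker that never
        // ticks is indistinguishable from a blocked one, so we grow anyway.
        return true;
    }

    private void startWorker() {
        workers.incrementAndGet();
        Thread t = new Thread(() -> {
            tick();
            while (true) {
                try {
                    Runnable r = tasks.take();
                    tick();
                    r.run();
                } catch (InterruptedException e) { return; }
            }
        });
        t.setDaemon(true);
        t.start();
    }

    int workerCount() { return workers.get(); }
}

public class HeartbeatDemo {
    public static void main(String[] args) throws Exception {
        HeartbeatScheduler sched = new HeartbeatScheduler();
        CountDownLatch done = new CountDownLatch(1);
        sched.submit(done::countDown); // no live workers yet, so one is created
        System.out.println(done.await(2, TimeUnit.SECONDS) ? "ran" : "timed out");
        System.out.println("workers: " + sched.workerCount());
    }
}
```

A real implementation would also cap the pool size, which is exactly the limit PetrolHead points out: once growth is bounded, sufficiently "ill" user code still starves.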
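The "significant paradigm shift" bsdemon's suggestion implies shows up even in a minimal Java NIO sketch: instead of one thread blocking per connection, a single selector thread multiplexes readiness events, and straight-line blocking reads become an event loop. This is an illustrative echo-server skeleton with error handling omitted, not production code.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// One selector thread serving many connections: the non-blocking style
// bsdemon alludes to, in miniature. Error handling omitted for brevity.
public class EchoLoop {
    // Starts the event loop on an ephemeral port and returns that port.
    public static int start() throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false); // the paradigm shift: nothing blocks
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        Thread loop = new Thread(() -> run(selector, server));
        loop.setDaemon(true);
        loop.start();
        return port;
    }

    private static void run(Selector selector, ServerSocketChannel server) {
        try {
            while (true) {
                selector.select(); // wait for readiness events, not for data
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        if (client == null) continue;
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        if (client.read(buf) == -1) { client.close(); continue; }
                        buf.flip();
                        client.write(buf); // echo back whatever arrived
                    }
                }
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        int port = start();
        try (java.net.Socket s = new java.net.Socket("localhost", port)) {
            s.getOutputStream().write("ping".getBytes());
            byte[] reply = new byte[4];
            int n = s.getInputStream().read(reply);
            System.out.println(new String(reply, 0, n));
        }
    }
}
```

The inversion is the point Erik makes in his reply: existing straight-line code does not fit this shape, which is why a runtime with special support (as in BEAM or GHC) is the other path.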