Notes on "Practical Abstractions for Dynamic and Parallel Software," a talk by Umut A. Acar.

Dynamic Part
  1. Part of the focus is that as data changes, the computation on that data often changes only slightly; he discusses a dynamic approach that updates results incrementally rather than recomputing from scratch
    1. This can make an intractable problem tractable; in his example, maintaining the convex hull of dynamically changing points goes from years to seconds
  2. Says dynamic algorithms designed for particular problems generally lag about 10 years behind their static counterparts
  3. Domains that admit dynamic algorithms are stable, which means that similar inputs yield similar computations (in terms of actual execution)
  4. Idea is to represent execution as data structures that change over time
    1. The idea draws on a combination of the computational models of Turing and Church: a tree that records all the processing performed
  5. The system can figure out which parts of the execution tree are affected by a given change in the input, and only the affected parts need to be recomputed
  6. Update time is proportional to the stability of the problem, i.e., how much execution is shared between similar inputs
  7. Get two orders of magnitude saving in some cases
  8. Run [specially annotated and specially restructured] code through a special compiler that produces a self-adjusting executable
  9. Because the computation is so fast (orders of magnitude), it has allowed for the solving of some open problems:
    1. Live computation of convex hulls in higher dimensions on dynamic data
    2. Dynamic meshing: triangulate a dynamically changing 3D mesh such that no triangle has small angles (small angles cause floating-point instability). Here the result is close to optimal (this is not manipulation of a shape, but deletion and random re-adding of points)
    3. Probabilistic inference
  10. Most of these results just leverage an existing static result, figuring out that it is stable, and then making it dynamic
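The core mechanism described above can be sketched as dependency tracking plus change propagation. This is a minimal illustration, not Acar's actual system (which works via a compiler on annotated code); all names here are invented:

```python
# A "modifiable" cell remembers which computations read it; when its value
# changes, only those dependent computations re-run, not the whole program.
class Mod:
    def __init__(self, value):
        self.value = value
        self.readers = []              # thunks to re-run when value changes

    def read(self, thunk):
        self.readers.append(thunk)     # record the dependency
        thunk()                        # run once to establish the result

    def write(self, value):
        if value != self.value:        # stability: an unchanged write propagates nothing
            self.value = value
            for thunk in self.readers:
                thunk()                # change propagation: re-run only dependents

# Example: keep total = a + b up to date as a changes.
a, b = Mod(1), Mod(2)
total = Mod(None)
a.read(lambda: total.write(a.value + b.value))

a.write(10)                            # recomputes just the sum, not everything
print(total.value)                     # 12
```

In a real self-adjusting system the dependency graph mirrors the whole execution tree, so the amount of work on an update is proportional to how much of that tree the change actually touches.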

Parallelism Part

  1. Basic point is that the overhead involved in parallelizing code is actually significant; finding the point where parallelization actually pays off is called the granularity-control problem
  2. They have methods that address the issue; speedups are significant, but not linear in the number of additional cores
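A common granularity-control tactic (a sketch of the general idea, not necessarily the speaker's method) is a sequential cutoff: spawn parallel tasks only for chunks large enough to amortize the task-creation overhead. The cutoff value below is an arbitrary illustration; real systems tune or predict it:

```python
from concurrent.futures import ThreadPoolExecutor

CUTOFF = 10_000  # assumed threshold: below this, task overhead dominates

def psum(xs, pool, cutoff=CUTOFF):
    if len(xs) <= cutoff:
        return sum(xs)                            # small input: run sequentially
    chunks = [xs[i:i + cutoff] for i in range(0, len(xs), cutoff)]
    return sum(pool.map(sum, chunks))             # fork one task per chunk, then join

with ThreadPoolExecutor() as pool:
    result = psum(list(range(100_000)), pool)
print(result)  # 4999950000
```

The hard part, per the talk, is choosing the cutoff: too low and overhead eats the speedup, too high and cores sit idle, which is one reason measured speedups fall short of linear.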

Dynamic Parallel

  1. Dynamic and parallel issues are “duals” of the same problem
  2. Can get the benefit of both by programming for just one
  3. Looking at doing MapReduce by this method: a small change was needed to make it stable, and it was then implemented in Hadoop
  4. The Hadoop result just makes it faster; no end-user changes are needed
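One way to see why a small structural change can make a reduction stable (an illustrative sketch, not the speaker's actual Hadoop patch): reduce in a balanced tree rather than a left-to-right fold, so changing one input record invalidates only the O(log n) partial results on the path from that leaf to the root:

```python
def tree_sums(xs, memo):
    """Sum xs in a balanced tree, recording every intermediate result."""
    if len(xs) == 1:
        memo.append(xs[0])
        return xs[0]
    mid = len(xs) // 2
    total = tree_sums(xs[:mid], memo) + tree_sums(xs[mid:], memo)
    memo.append(total)
    return total

old, new = [], []
tree_sums([1] * 8, old)
tree_sums([1, 1, 1, 2, 1, 1, 1, 1], new)   # one record changed
changed = sum(a != b for a, b in zip(old, new))
print(changed, "of", len(old))              # 4 of 15: just the leaf-to-root path
```

With a left-to-right fold, every partial sum after the changed record would differ; with the tree, the unchanged subtrees can be reused on update, which is the stability property the dynamic machinery needs.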

